
Beginner’s Guide to OpenAI’s GPT-3.5-Turbo Model

From GPT-3 to GPT-3.5-Turbo: Understanding the Latest Upgrades in OpenAI’s Language Model API.


As you may already know, OpenAI has recently made their GPT-3.5-Turbo Model API accessible to the public. This completely changes the dynamics of API usage, so be prepared to see chatbots implemented everywhere in the coming days.

In this tutorial, we will examine why the GPT-3.5-Turbo model is preferable to the previous GPT-3 version. We will go over the changes that have been made, discuss the new use cases, and provide code snippets to demonstrate how to implement it.

If coding isn't your thing, feel free to skip the 'How to Use' section.

What is the GPT-3.5-Turbo Model?

If you've landed on this tutorial, chances are that you're already familiar with ChatGPT or OpenAI APIs, so I won't go into the details of their history.

The GPT-3.5-Turbo Model is OpenAI's latest and most advanced language model, which powers the popular ChatGPT. Thanks to its capabilities, anyone now has the theoretical opportunity to build their own chatbot that can be just as powerful as ChatGPT.

The new GPT-3.5-Turbo model can accept a series of messages as input, unlike the previous version, which only allowed a single text prompt. This capability unlocks some interesting features, such as the ability to store prior responses or to query with a predefined set of instructions and context. This is likely to improve the generated response, and I will demonstrate it in a later section.

Comparison: GPT-3.5-Turbo vs. GPT-3

The GPT-3.5-Turbo Model is a superior option compared to the GPT-3 Model, as it offers better performance across all aspects while being 10 times cheaper per token. Moreover, you can still perform single-turn tasks with only a minor adjustment to the original query prompt, while taking advantage of the discounted price offered by the GPT-3.5-Turbo Model.
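The "minor adjustment" for single-turn tasks amounts to wrapping the old prompt string in a single user message. Here is a minimal sketch (the toChatMessages helper name is my own, not part of the library):

```javascript
// Hypothetical helper: wrap an old-style single prompt string in the
// chat message format that gpt-3.5-turbo expects.
const toChatMessages = (prompt) => [{ role: "user", content: prompt }];

// The same single-turn query works with either model:
const singleTurnPrompt = "Summarize the plot of Hamlet in one sentence.";
const chatMessages = toChatMessages(singleTurnPrompt);
// Pass `chatMessages` as `messages` to the chat endpoint instead of
// passing the raw string as `prompt` to the completion endpoint.
```
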

The following example illustrates the difference between the message-style query used by the new GPT-3.5-Turbo Model and the old prompt-style query. With the new model, we can incorporate more context and even prior responses into the conversation.

// GPT-3 model prompt example
const GPT3Prompt = `Give an example of how to send an openai api request in JavaScript`;

// GPT-3.5-Turbo model message example
const GPT35TurboMessage = [
  { role: "system", content: `You are a JavaScript developer.` },
  {
    role: "user",
    content: "Which npm package is best for openai api development?",
  },
  { role: "assistant", content: "The 'openai' Node.js library." },
  { role: "user", content: "How to send an openai api request" },
];

Except for some highly specialized use cases, everyone is advised to adopt the new model for all future implementations.

How to Use the GPT-3.5-Turbo Model

The process of upgrading to the new GPT-3.5-Turbo Model API is simple and straightforward. In this tutorial, I will demonstrate how to do so using Node.js. However, the same concept applies to other programming languages of your choice.

Below is a code snippet that demonstrates how to upgrade to the new GPT-3.5-Turbo Model API using Node.js, with examples for both the previous GPT-3 method and the new GPT-3.5-Turbo method.

Before proceeding, make sure you have acquired your OpenAI API key and set up your project accordingly. For more information, you can refer to my tutorial "Getting Started with OpenAI API".

At the time of writing this tutorial, the createChatCompletion method has not been officially released in the openai-node library. However, upon digging through their GitHub code, it appears that the method is indeed available for use in the latest version.

import dotenv from "dotenv";
import { Configuration, OpenAIApi } from "openai";

// Load the environment variables from the .env file
dotenv.config();

// Creating an instance of OpenAIApi with API key from the environment variables
const openai = new OpenAIApi(
  new Configuration({ apiKey: process.env.OPENAI_KEY })
);

const topic = "JavaScript";
const question = "How to send an openai api request";

// Setting values for the prompt and message to be used in GPT-3 and GPT-3.5-Turbo
const GPT3Prompt = `Give an example of ${question} in ${topic}`;
const GPT35TurboMessage = [
  { role: "system", content: `You are a ${topic} developer.` },
  {
    role: "user",
    content: "Which npm package is best for openai api development?",
  },
  { role: "assistant", content: "The 'openai' Node.js library." },
  { role: "user", content: question },
];

// Function to generate text using the GPT-3 model
let GPT3 = async (prompt) => {
  const response = await openai.createCompletion({
    model: "text-davinci-003",
    prompt,
    max_tokens: 500,
  });
  return response.data.choices[0].text;
};

// Function to generate text using the GPT-3.5-Turbo model
let GPT35Turbo = async (message) => {
  const response = await openai.createChatCompletion({
    model: "gpt-3.5-turbo",
    messages: message,
  });
  return response.data.choices[0].message.content;
};

// Log the generated text from the GPT-3 and GPT-3.5-Turbo models to the console
console.log("### I'm GPT-3. ####", await GPT3(GPT3Prompt));
console.log("### I'm GPT-3.5-TURBO. ####", await GPT35Turbo(GPT35TurboMessage));

/* ****** Response Section ******/

// ### I'm GPT-3.

//Include the OpenAI JavaScript Client Library
const openAiClient = require("openai-client");

//Provide your OpenAI API key
let apiKey = "YOUR_API_KEY";

//Set up the OpenAI client with your API key
const client = openAiClient(apiKey);

//Create the request object with your desired parameters
let requestOptions = {
  engine: "davinci",
  prompt: "Are you feeling okay?",
  max_tokens: 25,

//Send the request to OpenAI
client.request(requestOptions, function (err, result) {
  if (err) {
  } else {
    //The response from OpenAI will be in the 'result' object

// ### I'm GPT-3.5-TURBO.

To send an OpenAI API request using the 'openai' Node.js library, you would follow these general steps:

  1. First, you need to install the 'openai' package from npm using the command: npm install openai
  2. Next, you would need to import the library into your file using const openai = require('openai');
  3. Authenticate your API key by setting it as an environment variable, or passing it as a parameter to the openai.api_key property.
  4. Use one of the various API methods provided by the openai library to send a request to OpenAI. For example, to use the 'Completion' endpoint, you could use the following code:
// Set the API key
openai.api_key = "YOUR_API_KEY";

// Send a 'Completion' API request
const prompt = "Hello, my name is";
const requestOptions = {
  prompt,
  temperature: 0.5,
  max_tokens: 5,
  n: 1,
  stream: false,
  stop: "\n",
};

openai.completions
  .create(requestOptions)
  .then((response) => {
    console.log(response);
  })
  .catch((error) => {
    console.log(error);
  });

In this example, we are using the openai.completions.create() method to send a request to the 'Completion' endpoint, which generates text completion based on the provided prompt. We are then logging the response from the API to the console. Note that the requestOptions object contains various parameters that can be used to customize the request.

  • The GPT3Prompt variable is set to a string that includes the question and topic variables and will be used as input to the GPT-3 model.
  • The GPT35TurboMessage variable is set to an array of objects that simulate a conversation between a user, an assistant, and a system, and will be used as input to the GPT-3.5-Turbo model.
  • Two functions are defined to generate text using the GPT-3 and GPT-3.5-Turbo models: GPT3 and GPT35Turbo, respectively.
  • The GPT3 function uses the openai.createCompletion() method to generate text based on the GPT3Prompt variable and the text-davinci-003 GPT-3 model.
  • The GPT35Turbo function uses the openai.createChatCompletion() method to generate text based on the GPT35TurboMessage variable and the gpt-3.5-turbo GPT-3.5-Turbo model.
  • Finally, the generated text from the GPT-3 and GPT-3.5-Turbo models is logged to the console using console.log().

The output produced by the GPT-3.5-Turbo model is significantly better than that of the GPT-3 model because I provided more context to the request. In contrast, the GPT-3 model went off on its own path and suggested a non-existent library when generating the response.

As you can see from the code, upgrading to the new GPT-3.5-Turbo model doesn't require many changes. Additionally, you have the option to include more context in the messages to better tailor your request to the task at hand.

Best Practices for Using the GPT-3.5-Turbo Model

It's too early to provide best practices since the new model was released only a couple of days ago. However, I can offer a few suggestions to guide you in the right direction.

  1. Always use the latest model available.
  2. Multi-turn conversations usually produce better results.
  3. System messages can help establish desired behavior.
  4. Both assistant and user messages provide additional context.
  5. Specify the desired output format in the request.
  6. The models do not retain memory of past requests, so include all relevant context in message exchanges.
  7. All additional parameters, such as temperature and max_tokens, still function properly as before.
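Point 6 in particular means any "memory" has to live on your side: you keep the conversation history yourself and resend it with every request. A minimal sketch, assuming an openai client configured as in the earlier snippet (the remember helper is a hypothetical name):

```javascript
// The API is stateless, so conversation memory must be kept client-side
// and resent with each request.
const history = [
  { role: "system", content: "You are a helpful JavaScript tutor." },
];

// Hypothetical helper: append a turn and return the full history.
const remember = (role, content) => {
  history.push({ role, content });
  return history;
};

// Usage sketch (assumes an `openai` client as configured earlier):
// remember("user", "What is a closure?");
// const res = await openai.createChatCompletion({
//   model: "gpt-3.5-turbo",
//   messages: history,
// });
// remember("assistant", res.data.choices[0].message.content);
```
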

Some Thoughts on Potential Use Cases

Given the rapid evolution, decreasing costs, and expanding options available for various AI technologies, I strongly urge any mid to large-sized business to start considering their AI strategy before it becomes too late.

I think any business that tries to replace its entire workforce with AI doesn't really understand the technology.

While multi-turn conversations can be useful, it's important to note that the amount of prior conversation you can include in a request is limited by the model's context window.

AI should be viewed as a tool that empowers human workers, rather than as a substitute for them.

Let's explore a couple of potential use cases that can benefit your business.

Customer Support

I know that customer support is an area where many businesses face challenges.

  • Customers may not be happy with the quality of service they receive, or they may have to wait too long to speak with a representative.
  • Employees may feel overworked and not have the resources they need to do their jobs properly.
  • Management may be dissatisfied with the cost of providing customer support, which can negatively impact the company's bottom line.

Incorporating AI into your customer support workflow can offer numerous benefits. For example,

  • Text-based queries commonly received through channels such as online chat, email, and help desk tickets can be efficiently managed by employing a customized chatbot with a comprehensive understanding of your products and services, as well as common troubleshooting steps. This can help you quickly filter and resolve basic customer inquiries, potentially saving valuable time and resources.
  • Voice-based queries can be addressed using AI tools such as OpenAI's Whisper, which can transcribe incoming voice messages into text for processing by the same chatbot. You can also leverage text-to-speech services like Amazon Polly to enhance the overall customer experience.
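The customized-chatbot idea above largely comes down to priming the system message with your product knowledge before appending the customer's query. A rough sketch (buildSupportMessages and the sample facts are hypothetical):

```javascript
// Hypothetical helper: prime a support chatbot with product knowledge
// via the system message, then append the incoming customer query.
const buildSupportMessages = (productFacts, customerQuery) => [
  {
    role: "system",
    content: `You are a friendly support agent. Product knowledge: ${productFacts}`,
  },
  { role: "user", content: customerQuery },
];

// The resulting array would be passed as `messages` to the chat endpoint.
```
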

Legal Services

AI can be a valuable tool for law firms and other professional service entities. For instance,

  • Initial consultations, similar to customer support cases, can be handled by AI-powered chatbots for basic inquiries that don't require complex analysis, freeing up time for human staff to focus on more complex issues.
  • Using natural language processing AI such as GPT models can help streamline the process of reading through complex legal documentation such as contracts, legal briefs, court filings, laws and regulations, and legal opinions. This allows for more efficient analysis, as these models can work through thousands of lines of text far faster than a human reviewer.
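One practical wrinkle with long documents is that a full contract won't fit in a single request, so it has to be split into pieces that fit within the model's context window. A naive character-based sketch (a real pipeline would split on token counts and section boundaries; chunkDocument is a hypothetical name):

```javascript
// Hypothetical helper: naively split a long document into fixed-size
// character chunks, each to be analyzed in its own API request.
const chunkDocument = (text, maxChars = 8000) => {
  const chunks = [];
  for (let i = 0; i < text.length; i += maxChars) {
    chunks.push(text.slice(i, i + maxChars));
  }
  return chunks;
};
```
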

Note that OpenAI no longer utilizes the data sent via the API for training models. This change was made to address privacy concerns raised by many companies.

Wrap Up

I believe that most businesses can benefit from implementing AI technology to some extent, depending on their budget and technical expertise. With the increasing availability and affordability of AI solutions, we can expect to see a growing number of companies offering tailored AI services to businesses of all sizes in the near future.
