Creating a command line AI chat app with Node.js and Google Gemini

A guide on how to create a command line AI chat app with Node.js and Gemini

Tech is constantly evolving, and that has rarely been more apparent than in the time since ChatGPT launched in November 2022.

Since then, AI has dominated the tech industry: some companies have built services similar to ChatGPT, while countless others have put the power of machine learning and LLMs to work to push their products forward.

It’s an exciting time, but it can be a bit terrifying as a developer. AI has so much potential and is so clearly going to be a huge factor in the future of tech, and it can feel daunting to add it to the endless plate of new things to learn as a programmer.

So I decided to do some of the legwork by digging into a large language model (LLM), Google’s Gemini in this case, and putting together a basic application that lets you chat with an AI from the command line in a Node app. Google has considerable documentation for the Gemini API, and there are a lot of places to go with it from here, but this should get you up and running!

Project setup

The first thing that you’ll need to do is get a couple of packages installed. Create a new directory and run npm init inside it to create a package.json file for your project. The default options are all fine.

From there, you’ll need to install a couple of packages (readline ships with Node.js as a core module, so it doesn’t need to be installed from npm). This command will do the trick:

npm i @google/generative-ai dotenv

Now that you have the appropriate packages, you’ll need to head to the Gemini documentation and create an API key.

Once you have your key, create a .env file in the root of your project and save the key as API_KEY:

API_KEY=[KEY]

Make sure not to include the brackets.
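
Since this key is a secret, you’ll also want to keep it out of version control. Assuming you’re using git, add the file to a .gitignore in the project root:

.env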

Building the app

Now we’re ready to code! Create a main.js file in your project, and copy the following code into it:

const { GoogleGenerativeAI } = require("@google/generative-ai");
const readline = require("readline");
const dotenv = require("dotenv");
dotenv.config();

const rl = readline.createInterface({
  input: process.stdin,
  output: process.stdout,
});

const genAI = new GoogleGenerativeAI(process.env.API_KEY);
let model;
let chat;

Let’s walk through this before we move on. First of all, we’re importing the GoogleGenerativeAI module from Google. This will handle all of our connections to the Gemini API. Next, we’re importing readline — we’ll be using this to prompt for text input at the command line. Lastly, dotenv allows you to read your API_KEY from your environment file.

From there, we create a new instance of the GoogleGenerativeAI object using our API key and instantiate a couple of variables, which we’ll use in a moment. Now to create the actual connection!

Add this code under what we currently have:

function connectToGemini() {
  model = genAI.getGenerativeModel({ model: "gemini-pro" });
  chat = model.startChat({});
}

connectToGemini();

So now we have values assigned to our previously declared global variables. model is assigned a generative model, specifically gemini-pro. I won’t go too in-depth on the different available models in this post, but this one is great for text and chat use cases. If you want to learn more about the other models that can be used (for processing image input, for instance), Google covers them in the Gemini API documentation.
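
As an aside, switching models is just a matter of passing a different model name to the same call. A minimal sketch, assuming you wanted the multimodal model from the same family (the gemini-pro-vision name reflects the lineup at the time of writing; check Google’s model documentation for current options):

const visionModel = genAI.getGenerativeModel({ model: "gemini-pro-vision" });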

The chat variable is returned by calling the startChat method on the model object. We’re passing it an empty object because we don’t have any chat history. If you wanted to store chat history, you could pass it as an argument to startChat in order to start the chat within the context of the previous conversation.
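
As a sketch of what that could look like, here’s startChat seeded with a prior exchange. The history entries follow the SDK’s role/parts structure; the messages themselves are invented for illustration, and the exact shape can vary between SDK versions:

chat = model.startChat({
  history: [
    // Each entry is one turn in the conversation, alternating user and model
    { role: "user", parts: [{ text: "Hello, I have two dogs in my house." }] },
    { role: "model", parts: [{ text: "Great to meet you. What would you like to know?" }] },
  ],
});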

Lastly, we call connectToGemini so that the connection is set up as soon as the application starts.

Now let’s get to the meat of the project by writing a function to send user input to the LLM, and to process the responses. Add this code above the line where we run connectToGemini();

async function sendMessage(msg) {
  const result = await chat.sendMessageStream(msg);
  let text = "";
  for await (const chunk of result.stream) {
    const chunkText = chunk.text();
    console.log(chunkText);
    text += chunkText;
  }
}

This method takes in a msg parameter (a string) and uses sendMessageStream to send the message to Gemini, then awaits a response. We’re using sendMessageStream specifically to reduce wait time on the response, as the messages will come in line by line with a stream, as opposed to waiting for the entire response to be prepared.

From there, we take individual chunks from the resulting stream and log each chunk to the console. We’re also accumulating the entire response in the text variable.
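
If you’d rather wait for the complete reply and print it all at once, the chat object also exposes a non-streaming sendMessage method. A minimal sketch (the sendMessageBlocking name is just for illustration):

async function sendMessageBlocking(msg) {
  // Send the prompt and wait for the full response before printing anything
  const result = await chat.sendMessage(msg);
  const response = await result.response;
  console.log(response.text());
}

The tradeoff is latency: nothing prints until the entire response is ready, whereas the streaming version above starts printing as soon as the first chunk arrives.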

Prompting for user input

Now all we need to do is give the user the ability to send a message! Let’s add the following below our sendMessage method.

function askQuestion() {
  rl.question("Please enter your prompt: ", (prompt) => {
    if (prompt === "exit") {
      rl.close();
    } else {
      sendMessage(prompt);
    }
  });
}

This method accesses rl, the interface from the readline module we imported earlier. The first thing it does is ask the user for a prompt to send to Gemini. If that prompt is equal to ‘exit’, we close out (so that users have an easy way to quit); otherwise, we pass the prompt to our sendMessage method.

The last thing we need to do is tweak the sendMessage method to ensure that once Gemini has responded, the application prompts the user for more input. Let’s have that method call the askQuestion method, like so:

async function sendMessage(msg) {
  const result = await chat.sendMessageStream(msg);
  let text = "";
  for await (const chunk of result.stream) {
    const chunkText = chunk.text();
    console.log(chunkText);
    text += chunkText;
  }
  askQuestion();
}

Now, when the method is done processing a response, it will automatically ask the user for another prompt, allowing the conversation to continue!

There’s one last detail: nothing triggers the first prompt when the app starts. Add a call to askQuestion() right after the connectToGemini(); line at the bottom of the file:

connectToGemini();
askQuestion();
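
One optional improvement: if a request to Gemini fails (an invalid key, a network hiccup, a blocked response), the rejected promise will crash the app. Here’s a minimal sketch of sendMessage wrapped in a try/catch so the prompt loop survives an error; the exact handling is up to you, and the logged message wording is just illustrative:

async function sendMessage(msg) {
  try {
    const result = await chat.sendMessageStream(msg);
    let text = "";
    for await (const chunk of result.stream) {
      const chunkText = chunk.text();
      console.log(chunkText);
      text += chunkText;
    }
  } catch (err) {
    // Log the failure, but keep the app running so the user can try again
    console.error("Gemini request failed:", err.message);
  }
  askQuestion();
}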

And that’s how you build a basic chat application using Google Gemini with Node.js. All that’s left is to run node main.js and start chatting!
