“Within three to eight years, we will have a machine with the general intelligence of an average human being…if we're lucky, they might decide to keep us as pets.” — Marvin Minsky, 1970. Co-founder of the MIT AI Lab, Turing Award winner, and advisor to Stanley Kubrick on 2001: A Space Odyssey.
Coming up on 53 years since that quote, I'd say we've done okay in terms of averting a robot uprising.
However, we have reached a point where we can teach machines to learn, and to generate text and images based on what they learn. Memorizing statistical patterns isn't intelligence, of course, but with the advent of Large Language Models (LLMs) — like BERT, OpenAI's GPT-3, and the new ChatGPT — that approach near-human fluency with language, we can even use AI to aid human accessibility!
Think Chatbots/helpers that don't need to tell you “Press 3 if you're having trouble connecting to our servers”, but understand that you want to solve a connectivity issue when you tell it, “My game freezes at the login screen! Pls help!!1”
It follows, then, that we can use these LLMs — in an “ask a question, get an answer” capacity — to provide personalized, accessible, on-demand assistance to students, helping them learn at their own pace and according to their own needs and abilities. For e-learning, instant feedback and support that doesn't interrupt the current lecture (and indeed, isn't limited to regular class hours) only complements a course, making it more accessible and more engaging for students.
But enough with these thought experiments. Let's try our hand at building just such a frontend integration — a chat helper that can use OpenAI to answer a potential student's questions, without them having to tab out of the course!
What would the tech stack look like?
Something like this.
- A Next.js frontend that models an e-learning platform, with a Chat Helper/Assistant/Chatbot/whatever-you-want-to-call-it component that students can type questions into, and receive answers from.
- A Node.js/Express API that receives the questions from the frontend, proxies them on to the OpenAI ChatGPT servers, and serves the answers in response. The ChatGPT API is not public yet, but we can use the unofficial `chatgpt` package for our purposes.
- A backend-for-frontend (BFF) using WunderGraph, a free and open-source dev tool that uses GraphQL at build time only, serving data via secure JSON-over-RPC. The WunderGraph server will be a service layer (or API gateway, whatever you wish to call it) that serves as the only ‘backend' your frontend can see.
Why WunderGraph?
Why use a BFF pattern in the first place? Why not simply deal in GET/POST calls to your API from the frontend? Okay, let's indulge this hypothetical for a second. This is what your architecture might look like in that case.
Your frontend is now tightly coupled with your backend. You will have to commit hundreds of lines of code to your frontend repo just to orchestrate two-way communication between your frontend and the many microservices and APIs that your app uses. If any of them are nascent or fluid technology — ChatGPT being the prime example — you're going to frequently end up diving into your frontend code to make the necessary changes to the underlying wiring to make sure everything keeps working. Not ideal.
Using WunderGraph as a backend-for-frontend decouples the frontend — for any set of clients — from the backend, simplifying both maintenance and the two-way communication between the two. It uses GraphQL at build time only to turn this whole operation into simple queries and mutations, with complete end-to-end type safety, and with data from all your data sources consolidated into a single, unified virtual graph — served as JSON-over-RPC.
This way, you can parallelize all of your microservices/API calls, fetching the exact data each client needs in one go, with reduced waterfalls for nested data, and autocomplete for all your data fetching…all without the typical pain points of GraphQL, i.e. large client bundles and caching/security headaches.
Your app doesn't even need to offer a GraphQL endpoint; you're only harnessing its power for massive DX wins.
The Code
⚠ DISCLAIMER: All code for interacting with ChatGPT in this tutorial is confirmed working as of December 19th, 2022 (`chatgpt@3.3.1`). However, OpenAI's ChatGPT, and the `chatgpt` library by Travis Fischer, are constantly changing, and breaking changes are inevitable. Please check here if you find the code no longer works.
Part 1: Express, and the ChatGPT library
Step 0: Dependencies
Within the project root, create a directory for your API (`./backend` works fine), `cd` into it, and then run:
npm install express dotenv
npm install chatgpt
and optionally…
npm install puppeteer
We're using Express for the API, dotenv for environment variables, and an unofficial ChatGPT API (until OpenAI releases the official public API).
OpenAI has recently added Cloudflare protections to ChatGPT, making it harder to use the unofficial API. This library (optionally) uses puppeteer under the hood to automate bypassing those protections — all you have to do is provide your OpenAI email and password in an .env file.
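That .env file, in your backend directory, would look something like this (placeholder values, obviously — substitute your own OpenAI credentials):

```
OPENAI_EMAIL=you@example.com
OPENAI_PASSWORD=your-password-here
```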
Step 1: The Server
import { ChatGPTAPI, getOpenAIAuth } from "chatgpt";
import * as dotenv from "dotenv";
dotenv.config();
import express from "express";
const app = express();
app.use(express.json());
const port = 3001;
async function getAnswer(question) {
// use puppeteer to bypass cloudflare (headful because of captchas)
const openAIAuth = await getOpenAIAuth({
email: process.env.OPENAI_EMAIL,
password: process.env.OPENAI_PASSWORD,
// isGoogleLogin: true // uncomment this if using google auth
});
const api = new ChatGPTAPI({ ...openAIAuth });
await api.initSession();
// send a message and wait for the response
const response = await api.sendMessage(question);
// response is a markdown-formatted string
return response;
}
// GET
app.get("/api", async (req, res) => {
res.send({
question: "What is the answer to life, the universe, and everything?",
answer: "42!",
});
});
// POST
app.post("/api", async (req, res) => {
// Get the body from the request
const { body } = req;
console.log(body.question); // debug
res.send({
question: body.question,
answer: await getAnswer(body.question),
});
});
app.listen(port, () => {
console.log(`Example app listening on port ${port}`);
});
The API server code itself is pretty self-explanatory. It receives a question in the request body, uses ChatGPT to send it to OpenAI servers, awaits an answer, and serves both in this response format:
{ "question": "some question", "answer": "some answer" }
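One thing worth noting: `getAnswer` re-authenticates (spinning up puppeteer) on every single request, which is slow. A minimal sketch of a fix is to memoize the async setup so the login happens only once — the `once` helper below is a hypothetical name of ours, not part of the `chatgpt` library:

```javascript
// Run an async setup function at most once; every later call reuses the
// same promise (and therefore the same resolved ChatGPTAPI instance).
function once(fn) {
  let promise;
  return () => (promise ??= fn());
}

// e.g. wrap the auth + initSession sequence from getAnswer:
// const getApi = once(async () => { /* getOpenAIAuth, new ChatGPTAPI, initSession */ return api; });
// ...then inside the POST handler: const api = await getApi();
```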
Step 2: The OpenAPI Spec
WunderGraph works by introspecting your data sources and consolidating them all into a single, unified virtual graph that you can then define operations on, serving the results via JSON-over-RPC. For this introspection to work on a REST API, you'll need an OpenAPI (you might also know it as Swagger) specification for it.
An OpenAPI/Swagger specification is a human-readable description of your RESTful API. This is just a JSON or YAML file describing the servers an API uses, its authentication methods, what each endpoint does, the format for the params/request body each needs, and the schema for the response each returns.
Fortunately, writing this isn't too difficult once you know what to do, and there are several libraries that can automate it.
Here's the OpenAPI V3 spec for our API, in JSON.
{
"openapi": "3.0.0",
"info": {
"title": "express-chatgpt",
"version": "1.0.0",
"license": {
"name": "ISC"
},
"description": "OpenAPI v3 spec for our API."
},
"servers": [
{
"url": "http://localhost:3001"
}
],
"paths": {
"/api": {
"get": {
"summary": "/api",
"responses": {
"200": {
"description": "OK",
"content": {
"application/json": {
"schema": {
"type": "object",
"properties": {
"question": {
"type": "string",
"description": "The question to be asked",
"example": "What is the answer to life, the universe, and everything?"
},
"answer": {
"type": "string",
"description": "The answer",
"example": "42!"
}
}
}
}
}
}
},
"tags": []
},
"post": {
"summary": "/api",
"requestBody": {
"required": true,
"content": {
"application/json": {
"schema": {
"type": "object",
"properties": {
"question": {
"type": "string",
"description": "The question to be asked",
"example": "What is the answer to life, the universe, and everything?"
}
}
}
}
}
},
"responses": {
"200": {
"description": "OK",
"content": {
"application/json": {
"schema": {
"type": "object",
"properties": {
"question": {
"type": "string",
"description": "The question to be asked",
"example": "What is the answer to life, the universe, and everything?"
},
"answer": {
"type": "string",
"description": "The answer",
"example": "42!"
}
}
}
}
}
}
}
}
}
}
}
Part 2: Next.js + WunderGraph
Step 0: The Quickstart
We can set up our Next.js client and the WunderGraph BFF using the create-wundergraph-app CLI. `cd` into the project root (and out of your Express backend directory), and type in:
npx create-wundergraph-app frontend -E nextjs
Then, CD into the directory you just asked the CLI to create:
cd frontend
Install dependencies, and start:
npm i && npm start
That'll boot up the WunderGraph AND Next.js servers (leveraging the npm-run-all package), giving you a Next.js splash page at localhost:3000 with an example query. If you see that, everything's working.
Step 1: Setting up WunderGraph
WunderGraph can introspect pretty much any data source you can think of — microservices, databases, APIs — into a secure, typesafe JSON-over-RPC API; OpenAPI REST, GraphQL, PlanetScale, Fauna, MongoDB, and more, plus any Postgres/SQLite/MySQL database.
So let's get right to it. Open wundergraph.config.ts in the .wundergraph directory, and add our REST endpoint as a data source — one our app depends on, and one that WunderGraph should introspect.
// introspect and configureWunderGraphApplication are imported from
// "@wundergraph/sdk" at the top of the quickstart's wundergraph.config.ts
const chatgpt = introspect.openApi({
apiNamespace: "chatgpt",
source: {
kind: "file",
filePath: "./chatgpt-spec.json", // path to your openAPI spec file
},
requestTimeoutSeconds: 30, // optional
});
// add this data source to your config like a dependency
configureWunderGraphApplication({
apis: [chatgpt],
});
//...
A real-world app will of course have more than just one REST endpoint, and you'd define them just like this. Check out the different types of data sources WunderGraph can introspect here, then define them accordingly in your config.
Once you've run npm start, WunderGraph monitors the necessary files in your project directory automatically, so just hitting save here will kick off the code generator, producing a schema that you can inspect (if you want) — the wundergraph.app.schema.graphql file within /.wundergraph/generated.
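For our single REST endpoint, the relevant part of that generated schema should look something like this — the field and argument names come from our spec and namespace, but the exact generated type names are illustrative, so check the file itself:

```graphql
type Mutation {
  # namespaced as chatgpt_* because of apiNamespace: "chatgpt"
  chatgpt_postApi(postApiInput: chatgpt_postApiInput): chatgpt_PostApiResponse
}
```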
Step 2: Defining your Operations using GraphQL
This is the part where we write queries/mutations in GraphQL to operate on WunderGraph's generated virtual graph layer, and get us the data we want.
So go to .wundergraph/operations and create a new GraphQL file. We'll call it GetAnswer.graphql.
This is our mutation to send a question to our Express API (as a String), and receive an answer (with the original question included, for either your UI or just logging).
mutation ($question: String!) {
result: chatgpt_postApi(postApiInput: { question: $question }) {
question
answer
}
}
Mind the namespacing! Also, notice how we've aliased the chatgpt_postApi field as result.
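That alias matters on the client: it becomes the top-level key in the operation's response data. The raw JSON-over-RPC response for this mutation should look something like this (the answer text is illustrative; the generated React hook hands you the inner data object):

```json
{
  "data": {
    "result": {
      "question": "What is a closure?",
      "answer": "A closure is a function that retains access to its enclosing scope..."
    }
  }
}
```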
You'd also define your data fetching operations for other data sources in .graphql files just like this one.
Each time you've hit save throughout this process, WunderGraph's code generation has been working in the background (and it will, as long as its server is running), generating typesafe, client-specific data-fetching React hooks (useQuery, useMutation, etc.) on the fly for you (using Vercel's SWR under the hood). These are what we'll be using in our Next.js frontend.
Step 3: Building the UI
Our UI really needs just two things for a minimum viable product: a content area where you'd show your courses, tutorials, or any kind of content that you offer, and a collapsible Chat Assistant/Chatbot interface — one that uses one of the hooks we just talked about, useMutation.
pages/_app.tsx
import Head from "next/head";
function MyApp({ Component, pageProps }) {
return (
<>
<Head>
<meta charSet="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<script src="https://cdn.tailwindcss.com"></script>
</Head>
<main>
<Component {...pageProps} />
</main>
</>
);
}
export default MyApp;
Note that I'm using Tailwind for this app. Tailwind is a fantastic utility-first CSS framework, easily incorporated into any app via its Play CDN (though you'll probably want to switch to the PostCSS setup for production).
pages/index.tsx
import { NextPage } from "next";
import { withWunderGraph } from "../components/generated/nextjs";
// my components
import ChatHelper from "../components/ChatHelper";
import NavBar from "../components/NavBar";
const Home: NextPage = () => {
return (
<>
<div className="dark:bg-gray-800 min-h-screen">
<NavBar />
<div className="container flex mx-auto mt-10 h-fit">
<div className="bg-gray-300 p-4 rounded-md">
{/* page content */}
<h2 className="font-bold text-xl"> Page content goes here </h2>
<div>
{/* video/course content here */}
<iframe
className="rounded-lg w-full h-80 mt-4 "
src="https://www.youtube.com/embed/njX2bu-_Vw4"
title="YouTube video player"
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture"
allowFullScreen
></iframe>
</div>
<p className="mt-4">{/* some lipsum here */} </p>
</div>
<div className="absolute bottom-0 right-0 m-4">
<ChatHelper />
</div>
</div>
</div>
</>
);
};
export default withWunderGraph(Home); // for SSR with WunderGraph
components/NavBar.tsx
const NavBar = () => {
return (
<header className="bg-gray-400 p-4 shadow-md">
<div className="container mx-auto flex items-center justify-between">
<a href="#" className="font-bold text-xl">
My Website
</a>
<nav>
<a href="#" className="px-4 hover:underline">
Home
</a>
<a href="#" className="px-4 hover:underline">
FAQ
</a>
</nav>
</div>
</header>
);
};
export default NavBar;
The NavBar isn't really necessary for this example; it's just a go-to component I throw into all of my projects while laying out a page, to make things prettier 😅.
components/ChatHelper.tsx
import { useState } from "react";
import { useMutation } from "../components/generated/nextjs";
const ChatHelper = () => {
const placeholderAnswer = "Hi! What did you want to learn about today?";
const [isExpanded, setIsExpanded] = useState(false);
const [input, setInput] = useState("");
const { data, isMutating, trigger } = useMutation({
operationName: "GetAnswer",
});
return (
<div
className={`relative rounded-lg bg-white shadow-md ${
isExpanded ? "expanded" : "collapsed"
}`}
>
<button
className="p-4 absolute top-0 right-0 mr-2"
onClick={() => setIsExpanded(!isExpanded)}
>
<span>{isExpanded ? "❌" : "💬"}</span>
</button>
<div className="max-w-lg max-h-lg p-4">
{isExpanded && (
<>
<h2 className="text-lg font-bold">Helper</h2>
<p id="answer" className="text-blue-500 font-bold">
{data ? data.result?.answer : placeholderAnswer}
</p>
<p id="ifLoading" className="text-green-500 font-bold font-italics">
{isMutating ? "ChatGPT is thinking..." : ""}
</p>
<form
onSubmit={(event) => {
event.preventDefault();
if (input) {
trigger({
question: input,
});
}
}}
>
<input
className="border rounded-md p-2 w-full"
type="text"
placeholder="Your question here."
onChange={(event) => {
  // keep state in sync, even when the field is cleared
  setInput(event.target.value);
}}
/>
<button className="bg-blue-500 text-white rounded-md p-2 mt-2">
Help me, Obi-Wan Kenobi.
</button>
</form>
</>
)}
</div>
</div>
);
};
export default ChatHelper;
The useMutation hook is called only when you call trigger on form submit with an input (i.e. the question, which will end up being the request body in the Express backend). This is pretty intuitive, but for further questions regarding trigger, check out SWR's documentation here.
And you're done! Provided the ChatGPT servers aren't under heavy load, you should be able to type in a question, hit the button, and see an answer appear.
Where to go from here?
Hopefully, this tutorial has given you some insight into how you can use ChatGPT for your own use cases: writing an API around it, generating OpenAPI documentation for that API, and putting WunderGraph in front of it all as a BFF to make querying a cinch.
Going forward, you'll probably want to add a <ul> list of canned/pre-selected questions (based on the current course) that, when clicked, are passed to the <ChatHelper> component as questions — giving your students suggestions for where to start.
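A sketch of what that might look like as data — the course ids and questions here are made up for illustration; clicking a suggestion in the UI would simply call trigger({ question }) inside <ChatHelper>:

```javascript
// Hypothetical mapping from course id to suggested starter questions.
const cannedQuestions = {
  "js-101": ["What is a closure?", "How does the event loop work?"],
  "react-basics": ["What is a React hook?"],
};

// Look up suggestions for the course currently on screen,
// falling back to an empty list for unknown courses.
function suggestionsFor(courseId) {
  return cannedQuestions[courseId] ?? [];
}
```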
Other than that, you could also use the conversationId and messageId in the result object, passing them to sendMessage as conversationId and parentMessageId respectively to track the conversation with the bot — giving it an awareness of the questions asked immediately before, so your students can ask follow-up questions, get more relevant information, and make the conversation flow more naturally.
// send a follow-up
res = await api.sendMessage("Can you expand on that?", {
conversationId: res.conversationId,
parentMessageId: res.messageId,
});
console.log(res.response);
// send another follow-up
res = await api.sendMessage("What were we talking about?", {
conversationId: res.conversationId,
parentMessageId: res.messageId,
});
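On a multi-user platform, you'd want to track those ids per student. A minimal in-memory sketch (the helper names are hypothetical, and you'd swap the Map for a real session store in production):

```javascript
// One entry per student/session: the ids needed to continue their thread.
const sessions = new Map();

// Options to pass as sendMessage's second argument; empty object on first message.
function nextMessageOptions(sessionId) {
  return sessions.get(sessionId) ?? {};
}

// Call after each response so the next question continues the same conversation.
function recordResponse(sessionId, res) {
  sessions.set(sessionId, {
    conversationId: res.conversationId,
    parentMessageId: res.messageId,
  });
}
```

Inside the POST handler, you'd call nextMessageOptions() before api.sendMessage and recordResponse() right after it resolves.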
Additionally, keep an eye on the chatgpt library itself; OpenAI frequently changes how ChatGPT's research preview works, so you'll want to make sure your code keeps up with the unofficial API as it's updated.
Finally, if you want to know more about WunderGraph's many use cases, check out their Discord community here!