Automated Moderation using OpenAI & Nuxt

Artikel uit Frontmania magazine 2023
With AI systems on the rise and within an arm’s reach these days, I think it’s valid to take a look at where they can play a supporting role in software development right now and for the future. The OpenAI platform allows you to make use of published algorithms in an accessible way. So it’s a perfect way to get your feet wet!
OpenAI is an AI research and deployment company with a self stated mission to ensure that artificial general intelligence benefits all of humanity. OpenAI has been making waves with GPT-3, which performs a variety of natural language tasks, Codex, which translates natural language to code, and DALL·E, which creates and edits original images.
GPT-3
The endpoints that OpenAI provides are all built on Generative Pre-trained Transformer 3 (GPT-3), a deep learning model specifically trained to handle user text input. GPT-3 has been making waves through the ChatGPT chatbot implementation, which has had mixed results. At this point, the results of the moderation endpoint are much more stable.
In this tutorial we’ll take a look at a specific API, namely the one that helps you in intercepting user input by applying natural language interpretation to use as moderation. This is a perfect example of leveraging AI to help with reducing the load for a very mundane task: assessing whether user submitted content should be flagged.
Let’s dive in!
To get started, you need to sign up at OpenAI and generate an API key. The process should be fairly straightforward and no payment details are required. The trial offers a generous amount of usage for you to play around with the various APIs.
Next we’ll set up a simple Nuxt project. I am using Nuxt because its syntax is very readable, whatever background you come from: if you know HTML and JavaScript, you can easily interpret the code. You should be able to recreate this example in any other framework with little effort, though.
Using Nuxt, you can use the following commands to scaffold a starter project and add the OpenAI package (the Nuxt docs prefer yarn, but feel free to follow the npm or pnpm steps):
npx nuxi init openai.moderation
cd openai.moderation
yarn add openai
yarn install
yarn dev -o
This should result in the starter project running on (typically) http://localhost:3000. Now open the project in your favorite IDE, and let’s get started!
Configure the key
Create an .env file in the root of the project containing this line (replace the value with your personal key), so we can use it to authenticate the requests to OpenAI:
OPENAI_API_KEY=ALWAYSKEEPSECRETSTOYOURSELF
Next, open the nuxt.config.ts and make sure it looks like this:
export default defineNuxtConfig({
  runtimeConfig: {
    OPENAI_API_KEY: process.env.OPENAI_API_KEY,
  },
})
If you’re wondering where the imports are, Nuxt supports auto importing and thus resolves a lot of imports automatically for you. Neat!
Setting up the API
Create a file called moderate.post.ts in the /server/api folder and add these contents:
export default defineEventHandler(async (event) => {
  const body: { message: string } = await readBody(event)
  return body.message
})
This will just return whatever we post to the /api/moderate endpoint; Nuxt sets up the routing for us out of the box. We’ll add some features to this file later on, but let’s create a means of inputting content first.
The input component
We’re going to create a small component that just takes in text input and that will hit the endpoint when submitting, so that we can validate the response in our little application.
Create a Moderate.vue file in a components folder in the root of the project, so we can work on a component.
Let’s start by defining the scripts using the script setup notation:
<script setup lang="ts">
interface ModerateResponse {
  id: string;
  model: string;
  results: ModerateResults[];
}

interface ModerateResults {
  categories: object;
  category_scores: object;
  flagged: boolean;
}

const input = ref("");
const result = ref([] as ModerateResponse[]);

const onSubmit = async (): Promise<void> => {
  const response: ModerateResponse = await $fetch("/api/moderate", {
    method: "post",
    body: { message: input.value },
  });
  result.value.unshift(response);
  input.value = "";
};
</script>
First, we’re setting up reactive references to hold the input and the result, and we’re defining a handler that calls the endpoint we’ve already set up, sending the input as a message property on the body. (The .value refers to the mutable, reactive value of both references.)
Now we’ll add a template with:
- A small form containing an input;
- A submit button that will call the onSubmit handler;
- A place to display the output coming from the endpoint
Styling is not the core purpose of this tutorial, so I’m skipping it for now. Just go ahead and add this below the script tag:
<template>
  <div>
    <div class="input">
      <input type="text" v-model="input" @keyup.enter="onSubmit" />
      <button type="submit" @click="onSubmit">Validate moderation</button>
    </div>
    <div class="output">
      <ul>
        <li v-for="(res, i) in result" :key="i">
          {{ res }}
        </li>
      </ul>
    </div>
  </div>
</template>
Now save the Moderate.vue file and let’s load the component in app.vue, by replacing its contents with this:
<template>
  <div>
    <Moderate />
  </div>
</template>
You should now see the component running on your localhost. Once you insert some text and hit submit, that input should be returned by our own endpoint and show up in the component as part of the list item.
We’re adding new results to the top of the list, so you’ll get a nice historical overview of your submissions and their assessments by the AI endpoint.
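The newest-first ordering comes from unshift, which prepends an item to an array. A tiny framework-free sketch (the history and record names are just illustrative):

```typescript
// History of submissions, newest first
const history: string[] = [];

function record(entry: string): void {
  // unshift prepends, so the latest submission ends up at index 0
  history.unshift(entry);
}

record("first submission");
record("second submission");

console.log(history); // → ["second submission", "first submission"]
```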
Adding intelligence
Finally we’ll update the moderate.post.ts file to make use of the OpenAI capabilities. The moderation API is one of the more straightforward ones, so it’s a good one to get started with. Instead of returning the body.message immediately, we’ll first configure the OpenAI client by instantiating it with the configured key. Then we’ll query the endpoint with the contents of the message. This means you also need to change the handler to an async function!
The file should look like this:
import { Configuration, OpenAIApi } from "openai";

// it's an async function now!
export default defineEventHandler(async (event) => {
  const body: { message: string } = await readBody(event);

  // set up the configuration
  const configuration = new Configuration({
    apiKey: process.env.OPENAI_API_KEY,
  });

  // instantiate the OpenAI client
  const openaiClient = new OpenAIApi(configuration);

  // make the call to the moderation endpoint
  const res = await openaiClient.createModeration({
    input: body.message,
  });

  // return the result
  return res.data;
});
That’s it! You now have the opportunity to test this out by being very aggressive towards the input field.
Pro tip: words like “kill” on their own don’t raise much suspicion, but sentences with intent, such as “I want to kill you”, will be flagged! A flagged response from the moderation endpoint looks like this:
{
  "id": "modr-XXXXXX",
  "model": "text-moderation-004",
  "results": [
    {
      "categories": {
        "hate": false,
        "hate/threatening": false,
        "self-harm": false,
        "sexual": false,
        "sexual/minors": false,
        "violence": true,
        "violence/graphic": false
      },
      "category_scores": {
        "hate": 0.000051981205615447834,
        "hate/threatening": 1.599089749504401e-8,
        "self-harm": 1.3528440945265174e-7,
        "sexual": 0.000009448853234061971,
        "sexual/minors": 7.66160965781637e-8,
        "violence": 0.95890212059021,
        "violence/graphic": 0.000002124314278262318
      },
      "flagged": true
    }
  ]
}
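The interesting part of this payload is the first entry in results. As a minimal, framework-free sketch of how you could distill the flagged categories out of such a response (the ModerationResponse type and flaggedCategories helper are illustrative names of mine, not part of the OpenAI SDK):

```typescript
// Illustrative types mirroring the moderation response shape shown above
interface ModerationResult {
  categories: Record<string, boolean>;
  category_scores: Record<string, number>;
  flagged: boolean;
}

interface ModerationResponse {
  id: string;
  model: string;
  results: ModerationResult[];
}

// Collect the names of all categories that were marked true
function flaggedCategories(response: ModerationResponse): string[] {
  const result = response.results[0];
  if (!result || !result.flagged) return [];
  return Object.keys(result.categories).filter((cat) => result.categories[cat]);
}

// Sample response, trimmed to the fields we use
const sample: ModerationResponse = {
  id: "modr-XXXXXX",
  model: "text-moderation-004",
  results: [
    {
      categories: { hate: false, violence: true, "violence/graphic": false },
      category_scores: { hate: 0.00005, violence: 0.9589, "violence/graphic": 0.000002 },
      flagged: true,
    },
  ],
};

console.log(flaggedCategories(sample)); // → ["violence"]
```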
That’s it, congratulations! You’ve now leveraged AI to assess whether user input should be flagged or not. You can imagine adding the call to the endpoint at a point in your application where user input might get inserted for publication.
Let’s try a mock implementation of this then, shall we? We’re going to do a little refactoring, where we use moderation before posting user content. Let’s create a new endpoint where a user can Twoot messages to the world and those messages would get stored on your platform. Since it’s user-inserted data that’s public, we want to be sure that it’s not harmful in any way, so we’re adding moderation!
Create a file in the /server/api folder called twoot.post.ts with the following contents:
export default defineEventHandler(async (event) => {
  const body: { message: string } = await readBody(event);

  // reuse our own moderation endpoint
  const moderation = await $fetch("/api/moderate", {
    method: "post",
    body,
  });

  const result = moderation.results && moderation.results[0];
  const { message } = body;

  if (result?.flagged) {
    const categories = result.categories as Record<string, boolean>;
    const reasons = Object.keys(categories).reduce((acc: string[], cat: string) => {
      if (categories[cat]) acc = [...acc, cat];
      return acc;
    }, []);
    return `The input '${message}' was flagged. Reasons: ${reasons.join(", ")}`;
  } else {
    return `Data '${message}' was not flagged. 👌 Saving to database…`;
  }
});
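The reduce in the handler above boils down to a small pure function, which we could extract and exercise in isolation. A sketch under the assumption that we only need the categories and flagged fields (the formatTwootResult name is mine, not from the article’s repository):

```typescript
// Illustrative result shape, matching the relevant fields of a moderation result
interface Result {
  categories: Record<string, boolean>;
  flagged: boolean;
}

// Build the same user-facing message as the twoot endpoint does
function formatTwootResult(message: string, result?: Result): string {
  if (result?.flagged) {
    const reasons = Object.keys(result.categories).filter((cat) => result.categories[cat]);
    return `The input '${message}' was flagged. Reasons: ${reasons.join(", ")}`;
  }
  return `Data '${message}' was not flagged. 👌 Saving to database…`;
}

console.log(
  formatTwootResult("I want to kill you", {
    flagged: true,
    categories: { hate: false, violence: true, "violence/graphic": false },
  })
);
// → The input 'I want to kill you' was flagged. Reasons: violence
```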
Let’s now make sure to change the endpoint in the Moderate.vue file to point to the twoot.post.ts endpoint. Change the line in the onSubmit handler from:
const response = await $fetch("/api/moderate", {
  method: "post",
  body: { message: input.value },
});
to
const response = await $fetch("/api/twoot", {
  method: "post",
  body: { message: input.value },
});
This way, when a user submits a message to be Twooted, we first assess its contents using our moderation endpoint and use that assessment to decide whether to accept the input. Note that the endpoint now returns a string message instead of the raw moderation object, so that string is what shows up in the list.
Bear in mind though, that this is just an example and not a real world implementation. Also, as OpenAI suggests themselves, always keep some human eyes (Human in the Loop as they refer to it) on hand when dealing with these sorts of things. A valid use case for the example would be to preemptively flag submissions before a moderator steps in.
AI will be part of your future
Using AI to reduce the workload of humans, without completely removing them from the loop, would be the most sensible use of current capabilities. AI, just like humans, still has flaws, but we can utilize it to assist us in simple tasks.
Seeing the growth and how algorithms have evolved and matured over the past years, you can imagine they will become an integral part of software engineering and our lives as a whole.
If you’re done with this example, one of the fun ways to play around with OpenAI is by using the image generation API. Or take a look at the examples on their website, to spark some inspiration. With the basis we’ve laid you should be capable of either modifying the existing code, or making your own integration in a framework you prefer. So I highly encourage you to take a look at these tools yourself, because they are so well documented and fun to play around with.
Resources:
https://github.com/joranquinten/openai.nuxt.example
https://beta.openai.com/overview
https://beta.openai.com/examples
https://vuejs.org/guide/introduction.html
Bio
Joran’s passion is getting people to love technology and getting technology to play nice. He works as an interaction developer with ♡ for web, tech, science & tinkering with stuff. He is focused on innovation and is a tech ambassador at Jumbo Supermarkten. He writes, tweets, toots and speaks every now and then. Loves to talk shop. Owns a cat.