React Native chatbot app created with OpenAI

How to build a React Native chatbot app with OpenAI and Amity Social Cloud — Part 1

Mark Worachote
Solutions Engineer
Android
iOS
Sep 8, 2023

Chat applications have become an essential part of our daily lives in today’s digital landscape. From customer support to personal communication, chat shapes the way we connect. As advances in Artificial Intelligence (AI) make conversations more efficient, engaging, and personalized, chatbots have emerged as game changers.

Imagine building a chatbot that can understand and respond to users just like ChatGPT. In this tutorial series, we’ll embark on an exciting journey to create a React Native chatbot app that harnesses the power of OpenAI’s language models together with the Amity Chat API and SDK and the advanced React Native Chat UI Kit. Whether you’re a seasoned developer or a curious newcomer, this series will empower you to build chat applications that stand out in today’s AI-driven world.

Part 1: Laying the Foundation

In this initial part of our tutorial series, we’ll establish the foundation for our React Native chatbot app. We’ll cover essential topics such as setting up your development environment, understanding the OpenAI API, and creating a TypeScript Node.js backend. By the end of this segment, you’ll have a robust backend infrastructure ready to integrate AI chatbot capabilities.

Now, let’s dive into the details of each step:

Step 1: Understanding OpenAI API

Before diving into the technical implementation, it’s essential to grasp the basics of OpenAI API. OpenAI provides an API that allows you to interact with their language models programmatically. This API enables you to send prompts to the model and receive generated text as responses. The prompts can be tailored to initiate conversations, ask questions, or perform any other text-based task you require.
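
To make this concrete, here’s a minimal sketch of what a chat completions request looks like (this assumes Node.js 18+ for the built-in fetch and an OPENAI_API_KEY environment variable; later in the tutorial we’ll make the same call with Axios):


async function askOpenAI(prompt: string) {
    // Send a single user prompt to OpenAI's chat completions endpoint
    const response = await fetch('https://api.openai.com/v1/chat/completions', {
        method: 'POST',
        headers: {
            'Content-Type': 'application/json',
            'Authorization': `Bearer ${process.env.OPENAI_API_KEY}`,
        },
        body: JSON.stringify({
            model: 'gpt-3.5-turbo',
            messages: [{ role: 'user', content: prompt }],
        }),
    });

    const data: any = await response.json();
    // The generated text lives in the first choice's message content
    return data.choices[0].message.content;
}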

Step 2: Setting Up Your OpenAI Account

To get started, you’ll need an OpenAI account. Head to the OpenAI website and sign up for an account. Once you have an account, you’ll be able to access the API key, which is essential for making requests to the OpenAI API.

Step 3: Obtaining Your API Key

After signing up and logging into your OpenAI account, navigate to the API section to generate an API key. This key will serve as the authorization token when making requests to the API. Keep this key secure and avoid sharing it publicly.

Step 4: Creating Your TypeScript Node.js Project

With OpenAI preparations complete, it’s time to set up your development environment for building the chatbot backend. We’ll use TypeScript and Node.js for this purpose. If you haven’t already, make sure Node.js is installed on your system.

Open your terminal and navigate to the directory where you want to create your project. Use the following command to initialize a new Node.js project:


npm init

This command will guide you through the process of creating a package.json file for your project.
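
If you accept the defaults, npm generates a package.json that looks roughly like this (the name is taken from your directory, so yours will likely differ):


{
  "name": "chatbot-backend",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "",
  "license": "ISC"
}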

Step 5: Installing TypeScript and Required Dependencies

To enable TypeScript support for your Node.js project, we’ll install TypeScript and the necessary type definitions. Use the following commands:


npm install typescript --save-dev
npm install @types/node --save-dev

Step 6: Creating a TypeScript Node.js Script with Webhook Support

In this step, we’ll modify our TypeScript Node.js script to handle incoming POST requests as a webhook. We’ll use the Express.js framework to create a simple webhook endpoint that logs incoming data and sends a response.
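
If you don’t have a script yet, a minimal placeholder is enough to start from. Here’s a sketch; the src/app.ts location is an assumption that matches the tsconfig.json we’ll add in Step 7:


// src/app.ts — placeholder entry point; we'll replace it with the webhook server below
console.log('Chatbot backend starting...');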

Install Required Dependencies

Begin by installing the required dependencies, Express.js and Axios, with the following command in your project directory:


npm install express axios

Installing Type Definitions

Now that we’ve added Express.js to our project, we need to install its TypeScript type definitions (Axios ships with its own type definitions, so no separate package is needed for it). Run the following command:


npm install @types/express --save-dev

Update the TypeScript Code

Modify the src/app.ts file to use Express.js and create a webhook endpoint that listens for POST requests. Replace the existing content of src/app.ts with the following code:


import express from 'express';
const app = express();
app.use(express.json());
// Webhook endpoint
app.post('/webhook', (req, res) => {
  const body = req.body;
  console.log('Received webhook data:', JSON.stringify(body));
  res.json({ message: 'Webhook data received successfully' });
});
app.listen(3000, () => {
  console.log('Server is running on port 3000');
});

In this code, we import the Express module, create an instance of the Express application, and define a webhook endpoint that handles incoming POST requests. The data received is logged, and a JSON response is sent to confirm successful data reception.
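
To sanity-check the endpoint locally once the server is running, you can send it a test POST request. Here’s a quick sketch using Node 18+’s built-in fetch (any HTTP client such as curl or Postman works just as well):


// test-webhook.ts — send a sample payload to the local webhook endpoint
async function testWebhook() {
    const response = await fetch('http://localhost:3000/webhook', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ hello: 'world' }),
    });
    console.log(await response.json()); // { message: 'Webhook data received successfully' }
}

testWebhook();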

Step 7: Configuring TypeScript

Create a tsconfig.json file in the root of your project to configure TypeScript settings. Here's a basic example:


{
  "compilerOptions": {
    "target": "ES6",
    "module": "CommonJS",
    "outDir": "./dist"
  },
  "include": ["src/**/*.ts"],
  "exclude": ["node_modules"]
}

This configuration specifies that TypeScript files from the src directory will be compiled to the dist directory.
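
Optionally, you can add convenience scripts to package.json so you don’t have to remember the compile-and-run command used later in this tutorial (the script names here are just a suggestion):


{
  "scripts": {
    "build": "tsc",
    "start": "tsc && node dist/app.js"
  }
}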

Step 8: Creating a Webhook URL with Ngrok

Now that we have our TypeScript Node.js application ready, it's time to create a webhook URL so we can receive text message events from Amity. Follow these steps, and be sure to create an Ngrok account first to get your authentication token (you can register it locally with ngrok config add-authtoken <your-token> on Ngrok v3, or ngrok authtoken <your-token> on older versions).

Step 1: Install Ngrok

Now, let’s install Ngrok globally on your system:


npm install -g ngrok

This command installs Ngrok and makes it accessible from any directory in your terminal.

Step 2: Create an Environment (.env) File

In your project root directory, create a new file and name it .env. You can use a text editor or create it using the terminal:


touch .env

Open the .env file using a text editor, and define your environment variables. For example:


PORT=3000 
OPENAI_API_KEY=your_openai_api_key 
NGROK_AUTH_TOKEN=your_ngrok_auth_token

Replace the values with your specific configuration. Here’s a brief explanation of the variables:

  • PORT : The port on which your Node.js server will listen.
  • OPENAI_API_KEY : Your OpenAI API key, which we’ll use to call the OpenAI API later on.
  • NGROK_AUTH_TOKEN : Your Ngrok authentication token.

Save and close the .env file.

Step 3: Configure Your Node.js Application to Use Environment Variables

In your TypeScript code (e.g., app.ts), configure your Node.js application to use the environment variables you defined in the .env file. You can use a library like dotenv to load the environment variables into your project.

First, install dotenv as a dependency:


npm install dotenv

Then, at the top of your TypeScript file, import and configure dotenv:


import express from 'express';
import * as http from 'http';
import dotenv from 'dotenv';

// Load variables from the .env file into process.env
dotenv.config();

const app = express();
app.use(express.json());
const port = process.env.PORT || 3000;

// Webhook endpoint
app.post('/webhook', (req, res) => {
  const body = req.body;
  console.log('Received webhook data:', JSON.stringify(body));
  res.json({ message: 'Webhook data received successfully' });
});
// Create a basic HTTP server
const server = http.createServer(app);
server.listen(port, () => {
  console.log(`Server is running on port ${port}`);
});

With this setup, you can access environment variables using process.env, such as process.env.PORT, process.env.OPENAI_API_KEY, and so on, throughout your application.
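
It can also help to fail fast when a required variable is missing. A small optional check like this, placed right after dotenv.config(), makes misconfiguration obvious:


// Optional: warn early if required environment variables are missing
for (const name of ['PORT', 'OPENAI_API_KEY']) {
    if (!process.env[name]) {
        console.warn(`Warning: environment variable ${name} is not set`);
    }
}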

Step 4: Start Ngrok to Create an HTTPS Tunnel

Open a new terminal window, navigate to your project directory, and compile and start your Node.js application:


tsc && node dist/app.js

Then, open a new terminal window and start Ngrok to create an HTTPS tunnel with the command below:


ngrok http 3000

Replace 3000 with the port number your Node.js server is listening on if it's different; in our case, we’ll use port 3000 as declared in the .env file.

Step 5: Access Your TypeScript Node.js Application

You can now access your TypeScript Node.js application over HTTPS using the Ngrok-generated Forwarding URL. Open a web browser and enter the HTTPS URL provided by Ngrok. This URL will serve as our webhook URL, which we’ll use to subscribe to Amity’s Real-Time Events. The ngrok URL should look similar to the example below:


https://your-ngrok-url.ngrok.io

Don’t forget that our webhook path is /webhook, so the full webhook URL should look similar to this:


https://your-ngrok-url.ngrok.io/webhook

Part 2: Getting Started on Amity

With our webhook URL ready, it’s time to integrate Amity into our chatbot application. Amity’s ability to publish real-time events will allow us to subscribe to these events, extract text messages sent by users, and input them into OpenAI for chatbot capabilities.

Prerequisites

  1. If you haven’t already registered for an Amity account, we recommend following our comprehensive step-by-step guide in the Amity Portal to create your new network.
  2. You’ll need to request feature enablement to be able to use webhooks; please submit your request here.

Step 1: Go to Amity Console

Simply navigate to the Amity Console and log in with your credentials.

Step 2: Navigate to Webhook Menu

Click on the “Webhook” menu on the left-hand side of the Amity Console.

Step 3: Add Webhook URL to Amity Console

Click the “Add URL” button, enter the newly created webhook URL (https://your-ngrok-url.ngrok.io/webhook), and click Submit.

Step 4: All Done

Congratulations! You’ve successfully subscribed to Amity Real-Time Events. Now, let’s focus on our Node.js application’s functionality.

Part 3: Connect Amity Chat Message to OpenAI

Now that we have established the webhook, it’s time to modify our webhook function to extract text information from Amity message events and send them to OpenAI.

Step 1: Create Object Types for Amity Message Events

In TypeScript, types are crucial for ensuring type safety, enhancing code quality, and improving productivity. Let’s create a types.ts file in the src directory (alongside app.ts, so it’s picked up by the tsconfig include we configured earlier) and define types for Amity message events. We'll include only the relevant fields in these types.


export interface MessageData {
    messages: Message[]
    users: User[]
}

export interface Message {
    _id: string
    type: string
    tags: any[]
    isDeleted: boolean
    createdAt: string
    editedAt: string
    channelSegment: number
    updatedAt: string
    childrenNumber: number
    path: string
    data: Data
    channelId: string
    userId: string
    messageId: string
    flagCount: number
    hashFlag: any
    mentionees: any[]
    reactionsCount: number
}

export interface Data {
    text: string
}

export interface User {
    _id: string
    path: string
    displayName: string
    updatedAt: string
    createdAt: string
    isDeleted: boolean
    userId: string
    roles: string[]
    flagCount: number
    hashFlag: any
}

Step 2: Extract Amity Message Event Information

Let’s update our webhook function to convert the received object into MessageData and extract the message's text.


import express, { Express, Request, Response } from 'express';
import dotenv from 'dotenv';
import util from 'util'; // Import the util module
import { MessageData } from './types';

dotenv.config();

const app: Express = express();
const port = process.env.PORT;
app.use(express.json());

app.get('/', (req: Request, res: Response) => {
    res.send('Express + TypeScript Server');
});
app.post('/webhook', (req: Request, res: Response) => {
    if(req.body.event == "message.didCreate"){ // We'll focus only on message event
        const messageData: MessageData = req.body.data;
        const messageText = messageData.messages[0].data.text // get message's text
        console.log("printing webhook event " + util.inspect(messageData, { depth: null }));
       
    }
    res.send('Received Amity webhook event');
});

app.listen(port, () => {
    console.log(`⚡️[server]: Server is running at http://localhost:${port}`);
});
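
Note that Amity channels can also carry non-text messages, and the Message type above includes a type field. Before reading data.text, you may want a small guard; here’s a sketch of a helper, assuming plain text messages are labeled 'text' (the same value we’ll use ourselves when sending messages later):


import { Message } from './types';

// Returns the text of a message, or undefined for non-text messages (image, file, etc.)
function extractMessageText(message: Message): string | undefined {
    if (message.type !== 'text') {
        return undefined;
    }
    return message.data.text;
}

You could then call extractMessageText(messageData.messages[0]) inside the webhook and simply skip the event when it returns undefined.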

Step 3: Send Message Text Data to OpenAI

Now that we have extracted the message’s text data, it’s time to send it to OpenAI for processing and obtain our chatbot’s response. First, let’s update our webhook function to call the OpenAI API with the received message text.


import express, { Express, Request, Response } from 'express';
import dotenv from 'dotenv';
import util from 'util'; // Import the util module
import { MessageData } from './types';
import axios, { AxiosRequestConfig } from 'axios';

dotenv.config();

const app: Express = express();
const port = process.env.PORT;
app.use(express.json());

app.get('/', (req: Request, res: Response) => {
    res.send('Express + TypeScript Server');
});
app.post('/webhook', (req: Request, res: Response) => {
    if(req.body.event == "message.didCreate"){
        const messageData: MessageData = req.body.data;
        const messageText = messageData.messages[0].data.text
        const response = getChatCompletions(messageText) // returns a Promise; we'll handle it properly in Part 4
        
        console.log("printing webhook event " + util.inspect(messageData, { depth: null }));
       
    }
    res.send('Received Amity webhook event');
});

async function getChatCompletions(message: string) {
    const axiosConfig: AxiosRequestConfig = {
        method: 'POST',
        url: 'https://api.openai.com/v1/chat/completions',
        headers: {
            'Content-Type': 'application/json',
            'Authorization': `Bearer ${process.env.OPENAI_API_KEY}`,
        },
        data: {
            model: 'gpt-3.5-turbo',
            messages: [
                { role: 'user', content: message },
            ],
            temperature: 0.7,
        },
    };

    try {
        const response = await axios(axiosConfig);
        console.log("Check openAI response text "+response.data.choices[0].message.content);
        const responseText = response.data.choices[0].message.content
       
        return responseText
        
    } catch (error) {
        console.error('Error:', error);
    }
}


app.listen(port, () => {
    console.log(`⚡️[server]: Server is running at http://localhost:${port}`);
});

In this updated code, we have added a section dedicated to interacting with the OpenAI API to obtain a chatbot response. Here’s a brief explanation of the OpenAI-related section:

  1. Async Function for OpenAI Interaction:

async function getChatCompletions(message: string) {
    // ...
}

This is an asynchronous function named getChatCompletions, responsible for making a request to the OpenAI API to obtain chatbot completions based on the input message.

2. Axios Request Configuration:


const axiosConfig: AxiosRequestConfig = {
    method: 'POST',
    url: 'https://api.openai.com/v1/chat/completions',
    headers: {
        'Content-Type': 'application/json',
        'Authorization': `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    data: {
        model: 'gpt-3.5-turbo',
        messages: [
            { role: 'user', content: message },
        ],
        temperature: 0.7,
    },
};

This part defines the configuration for the Axios HTTP request sent to the OpenAI API. It specifies the method, URL, headers (including the API key), and the JSON data payload that includes the user’s message and other parameters like the model and temperature.

3. Try-Catch Block for Making the Request:


try {
    const response = await axios(axiosConfig);
    // ...
} catch (error) {
    console.error('Error:', error);
}

Within the try block, Axios is used to send the request to the OpenAI API. The response from the API is stored in the response variable for further processing. If any errors occur during the request, they are caught in the catch block, and an error message is logged.

4. Handling OpenAI Response:


console.log("Check openAI response text " + response.data.choices[0].message.content);
const responseText = response.data.choices[0].message.content;
return responseText;

After receiving a successful response from the OpenAI API, the code logs the chatbot’s response content and extracts it from the response object. The chatbot’s response is stored in the responseText variable and returned from the function.
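
If you’d like to sanity-check getChatCompletions on its own before wiring it to the webhook (assuming OPENAI_API_KEY is set in your .env), a quick one-off call is enough:


// Quick manual test: log a single completion
getChatCompletions('Say hello in one short sentence.')
    .then((reply) => console.log('Chatbot reply:', reply))
    .catch((error) => console.error('Test call failed:', error));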

Part 4: Send the OpenAI Response Back to the User Through the Amity API

Now, we will send the response back to the user by calling the Amity send message API and passing the OpenAI response as the API’s request body.

Step 1: Get the Amity channelID

Let’s update our webhook function to store the channel ID where the conversation is happening, so we can send the chatbot’s reply back to the correct conversation.


app.post('/webhook', (req: Request, res: Response) => {
    if(req.body.event == "message.didCreate"){
        const messageData: MessageData = req.body.data;
        const messageText = messageData.messages[0].data.text
        const channelId = messageData.messages[0].channelId
        const response = getChatCompletions(messageText) // returns a Promise; we'll handle it properly in Step 3
        
        console.log("printing webhook event " + util.inspect(messageData, { depth: null }));
       
    }
    res.send('Received Amity webhook event');
});

Step 2: Authorize with Amity to get access token

Amity secures its APIs with access tokens. Every request must carry a valid token, so only authorized clients can interact with the platform, protecting the overall security and integrity of your network.

So first, let’s update our .env file to store the Amity API key and the chatbot’s user ID:


PORT=3000 
OPENAI_API_KEY=your_openai_api_key 
NGROK_AUTH_TOKEN=your_ngrok_auth_token
AMITY_API_KEY=your_amity_api_key
AMITY_USER_ID=your_chatbot_amity_user_id

We’ve added two more fields here:

  • AMITY_API_KEY: Your Amity API key (which you can find in the Amity Console under the Security menu at the bottom of the sidebar).
  • AMITY_USER_ID: Your chatbot’s unique Amity user ID (this can be anything, e.g. XYZ).

Then create a dedicated function to get the access token:


// Define a function to make the Amity API request and return an access token
async function createAmitySession() {
    const AMITY_API_KEY = process.env.AMITY_API_KEY;
    const AMITY_USER_ID = process.env.AMITY_USER_ID;
    const amityRegion = "sg"; // Add your Amity application's server region here: sg, eu, or us

    const axiosConfig: AxiosRequestConfig = {
        method: 'POST',
        url: `https://api.${amityRegion}.amity.co/api/v3/sessions`,
        headers: {
            'x-api-key': AMITY_API_KEY,
            'Content-Type': 'application/json',
        },
        data: {
            userId: AMITY_USER_ID,
            displayName: "Chatbot", // Can change to any display name
            deviceId: AMITY_USER_ID,
        },
    };

    try {
        const response = await axios(axiosConfig);
        console.log('Amity Session Created:', response.data.accessToken);
        return response.data.accessToken; // Return the token so other functions can use it
    } catch (error) {
        console.error('Error Creating Amity Session:', error);
    }
}

Step 3: Create a message and send it back to user

Now that we have the access token generation function, let’s call the Amity send message API to send the response back to the user.

First, let’s create a send message function:


async function sendMessageToAmity(channelId: string, message: string) {
    const accessToken = await createAmitySession(); // Wait for the access token before sending
    const amityRegion = "sg"; // Add your Amity application's server region here: sg, eu, or us
    const apiUrl = `https://api.${amityRegion}.amity.co/api/v3/messages`;

    const axiosConfig: AxiosRequestConfig = {
        method: 'POST',
        url: apiUrl,
        headers: {
            'Authorization': `Bearer ${accessToken}`,
            'Content-Type': 'application/json',
        },
        data: {
            channelId: channelId,
            type: 'text',
            data: {
                text: message, // Send the chatbot's response text
            },
        },
    };

    try {
        const response = await axios(axiosConfig);
        console.log('Message sent:', response.data);
        
        // You can further process the response here if needed.
    } catch (error) {
        console.error('Error sending message:', error);
    }
}

Then, let’s update our webhook function to call this method after we have received the response from OpenAI:


app.post('/webhook', (req: Request, res: Response) => {
    if(req.body.event == "message.didCreate"){
        const messageData: MessageData = req.body.data;
        const messageText = messageData.messages[0].data.text
        const channelId = messageData.messages[0].channelId
        getChatCompletions(messageText).then((responseText) => {
            if (responseText !== undefined) {
                sendMessageToAmity(channelId, responseText);
            }

            console.log("printing webhook event " + util.inspect(messageData, { depth: null }));
        })
            .catch((error) => {
                // The webhook has already been acknowledged below, so just log the error here
                console.error('Error:', error);
            });
    }   
    res.send('Received Amity webhook event');
});

However, this code can cause an infinite loop: when the chatbot’s reply is created, Amity publishes another real-time event, which re-enters the webhook function and satisfies the same if condition. Let’s modify our app.ts to handle this case.

First, let’s declare an array of strings to store the IDs of messages created by the chatbot.


// Define an array to store processed message IDs
const processedMessageIds: string[] = [];

We’ll store the ID of every message the chatbot creates. Now, let’s update our send message function to push each created message ID into this array:


function sendMessageToAmity(channelId: string, response: string) {
    createAmitySession()
        .then((accessToken) => {
            const amityRegion = "sg"; // Add your Amity application's server region here either it's eg, eu or us
            const apiUrl = `https://api.${amityRegion}.amity.co/api/v3/messages`;

            const axiosConfig: AxiosRequestConfig = {
                method: 'POST',
                url: apiUrl,
                headers: {
                    'Authorization': `Bearer ${accessToken}`, // Use the accessToken obtained from createAmitySession()
                    'Content-Type': 'application/json',
                },
                data: {
                    channelId: channelId,
                    type: 'text',
                    data: {
                        text: response,
                    },
                },
            };

            axios(axiosConfig)
                .then((response) => {
                    const messageData = response.data as MessageData
                    const messageId = messageData.messages[0].messageId
                    processedMessageIds.push(messageId);
                    console.log('Message sent:', messageData);

                    // You can further process the response here if needed.
                })
                .catch((error) => {
                    console.error('Error sending message:', error);
                });
        })
        .catch((error) => {
            console.error('Error creating Amity session:', error);
        });
}

Finally, let’s update our webhook function to prevent the infinite loop:


app.post('/webhook', (req: Request, res: Response) => {
    if (req.body.event == "message.didCreate") {
        const messageData: MessageData = req.body.data;
        const messageText = messageData.messages[0].data.text;
        const channelId = messageData.messages[0].channelId;
        const messageId = messageData.messages[0].messageId; // Assuming the message ID is available in the data

        // Skip messages that the chatbot itself created (their IDs are stored in processedMessageIds)
        if (!processedMessageIds.includes(messageId)) {
            getChatCompletions(messageText)
                .then((responseText) => {
                    if (responseText !== undefined) {
                        sendMessageToAmity(channelId, responseText);
                    }

                    console.log("printing webhook event " + util.inspect(messageData, { depth: null }));
                })
                .catch((error) => {
                    // The webhook has already been acknowledged below, so just log the error here
                    console.error('Error:', error);
                });
        }
    }
    res.send('Received Amity webhook event');
});

With these changes, we can now prevent the infinite loop!
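
One side note on this approach: processedMessageIds grows for as long as the server runs. An alternative sketch, assuming the event’s userId field identifies the message sender as in the Message type above, is to simply skip any event created by the chatbot’s own user:


import { MessageData } from './types';

// Alternative guard: returns true when the event was triggered by the chatbot's own message
function isFromChatbot(messageData: MessageData): boolean {
    return messageData.messages[0].userId === process.env.AMITY_USER_ID;
}

Inside the webhook, you would return early whenever isFromChatbot(messageData) is true, which avoids keeping the processedMessageIds list in memory.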

Conclusion

Congratulations! In this part of the tutorial series, we’ve laid the foundation for our React Native chatbot app by setting up a TypeScript Node.js backend, configuring webhooks with Amity, and integrating the OpenAI API to process user messages and provide chatbot responses.

In the next part of the series, we’ll dive into building the React Native front-end for our chatbot app. We’ll explore how to create a user-friendly interface and establish real-time communication between the app and our backend.

Now, head over to Part 2: Building the React Native front-end to continue your journey in creating an AI-powered chatbot React Native app that combines the best of Amity and OpenAI.

If you want to know more about Amity’s features, feel free to explore more on our website. And if you’re certain that a ready-made solution is more suitable for your business vision and goal, begin your journey by contacting us!