# Build an AI Chatbot
In this tutorial, you will learn how to create an AI chatbot using WSO2 Micro Integrator (MI), enabling seamless integration and hassle-free deployment.
## What you will build

In this tutorial, you will implement a chatbot that serves customer requests. The chatbot will be deployed as the Chat API in the WSO2 Micro Integrator.
## What you will learn
- How to create an Integration API for building an AI chatbot.
- How to deploy and test the chatbot integration.
## Prerequisites

- You need Visual Studio Code (VS Code) with the Micro Integrator for VS Code extension installed. The MI for VS Code extension is the official developer tool for designing, developing, and testing integration solutions with WSO2 Micro Integrator.

    **Info:** See the Install Micro Integrator for VS Code documentation to learn how to install Micro Integrator for VS Code.

- You need an OpenAI API key to proceed. Visit the OpenAI API Documentation for more details on obtaining and managing your API key.
Follow the instructions below to create your first Integration API.
## Step 1 - Create a new integration project

To develop the above scenario, let's get started by creating an integration project in VS Code with the Micro Integrator extension installed.
1. Launch VS Code with the Micro Integrator extension installed.
2. Click the Micro Integrator icon on the Activity Bar of the VS Code editor.
3. Click Create New Project on Design View. For more options for creating a new integration project, see Create an Integration Project.
4. In the Project Creation Form, enter `BankIntegration` as the Project Name.
5. Ensure `4.4.0` is selected as the Micro Integrator runtime version.
6. Provide a location for the integration project under Project Directory.
7. Click Create.

Once you click Create, the Add Artifact pane will open.
**Note:** You need the following to work with the MI for VS Code extension:

- Java Development Kit (JDK) version 21
- WSO2 Micro Integrator (MI) 4.4.0 runtime

If you don't have them installed on your local machine, the Micro Integrator for VS Code extension will prompt you to download and configure them during the project creation step:

1. Click Download Java & MI to download and set up the Java and MI runtimes.

    **Info:** If a different JDK or WSO2 MI version is installed on your local machine, you'll be prompted to download the required versions.

    - Click Download to install the required JDK and/or MI version(s).
    - Once the download is complete, configure the Java Home and/or MI Home paths by clicking Select Java Home and/or Select MI Path, respectively.

    If the required JDK and WSO2 MI versions are already installed, you can directly configure the Java Home and MI Home paths in this step by clicking Select Java Home and Select MI Path, respectively.

    Once the process is complete, a window reload will be required, and you will be prompted with the following message:

2. Click Reload Window.
## Step 2 - Create an API

Now the integration project is ready for adding an API. In this scenario, the API responds to the client with the response from the LLM. First, let's create the API.
1. In the Add Artifact interface, under Create an Integration, click API. This opens the API Form.
2. Enter `ChatAPI` as the API Name. The API Context field will be automatically populated with the same value.
3. Click Create.
Once you create the API, a default resource will be automatically generated. You can see this default resource listed in the Service Designer under Available resources. You will use this resource in this tutorial.
## Step 3 - Design the integration

Now it is time to design your API. This is the underlying logic that is executed behind the scenes when an API request is made. In this scenario, you need to send the user's request to the LLM and send the LLM's response back to the user. To do that, add the Chat operation of the AI Module. Follow the steps below to add a Chat operation.
**What is a connector?**

- Connectors in WSO2 Micro Integrator (MI) enable seamless integration with external systems, cloud platforms, and messaging services without the need for custom implementations. They provide a standardized way to send, receive, and process data from third-party applications like Salesforce, Kafka, and AWS services. To explore connectors in detail, see the Connector documentation.
- In VS Code, you can view all available connectors by clicking Add Module under the Mediators tab in the Mediator Palette.
1. Change the method of the default resource to `POST` by clicking the three dots icon on the resource.
2. Open the Resource View of the API resource by clicking the `POST` resource under Available resources on the Service Designer.
3. Define the expected payload in the start node to streamline the development process.

    **Note:** Setting an input payload for the integration flow is not mandatory. However, it is recommended, as it enables expression suggestions, which you will explore in later steps of this tutorial.

    Below is the expected payload for this API resource:

    ```json
    { "userID": "abc123", "query": "Hi!" }
    ```
4. Once you open the Resource View, click the + icon on the canvas to open the Mediator Palette.
5. Under Mediators, select + Add Module.
6. In the Add Modules pane, type `AI` in the Search field to locate the AI Module. Click Download to install the module.
7. Select the `chat` operation from the mediator panel.
8. Select + Add new connection in the LLM Connection field.
9. On the Add New Connection page, choose your desired LLM Provider. For this tutorial, we will use `OpenAI` as the provider.
10. Complete the connection form by entering `OPENAI_CONN` as the Connection Name and providing your API key in the OpenAI Key field.

    **Note:** Refer to the OpenAI API Documentation to obtain your API key.
11. For this tutorial, we will use file-based memory to store conversation history.

    **Note:** The memory connection is used to store the conversation history. This is useful for maintaining context in a conversation, especially when dealing with multiple turns of dialogue.

    **Warning:** The file-based memory connection is not suitable for production use cases; it is intended only for development purposes. For production applications, it is recommended to use database-based memory.

    1. Click + Add new connection in the Memory Connection field.
    2. On the Add New Connection page, select `FILE_MEMORY` as the Memory Type.
    3. Enter `FILE_MEMORY_CONN` as the Connection Name.
    4. Submit the form.
12. Now, complete the Chat operation configuration by filling the User Query/Prompt field with the payload value. Here, we will use the `query` value from the request payload as the user query for the Chat operation. Follow these steps to set it up:

    1. Click the fx icon next to the User Query/Prompt field to open the Expression Editor.
    2. In the Expression Editor, choose Payload and select the `query` field.
    3. Click Add to insert the expression (`${payload.query}`) into the field.

13. Next, enable the Overwrite Body option to ensure the API response body is replaced with the AI's output. Otherwise, you would need to set the response body manually using the Payload Mediator.

    Next, add a Respond Mediator to send the AI's response back to the client.

14. Click the + icon placed just after the Chat operation to open the Mediator Palette.
15. Select the Respond mediator from the Mediator Palette, and click Add to add it to the integration flow.
For reference, the complete API configuration is shown below.

**Chat API**

**Info:** You can view the source by clicking the Show Source (`</>`) icon located in the top right corner of VS Code.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<api context="/chatapi" name="ChatAPI" xmlns="http://ws.apache.org/ns/synapse">
    <resource methods="POST" uri-template="/">
        <inSequence>
            <ai.chat>
                <connections>
                    <llmConfigKey>OPENAI_CONN</llmConfigKey>
                    <memoryConfigKey>FILE_MEMORY_CONN</memoryConfigKey>
                </connections>
                <sessionId>{${payload.userID}}</sessionId>
                <prompt>${payload.query}</prompt>
                <outputType>string</outputType>
                <responseVariable>ai_chat_1</responseVariable>
                <overwriteBody>true</overwriteBody>
                <modelName>gpt-4o</modelName>
                <temperature>0.7</temperature>
                <maxTokens>4069</maxTokens>
                <topP>1</topP>
                <frequencyPenalty>0</frequencyPenalty>
                <maxHistory>10</maxHistory>
            </ai.chat>
            <respond/>
        </inSequence>
        <faultSequence>
        </faultSequence>
    </resource>
</api>
```
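Note how the configuration maps the `userID` payload field to `sessionId` and attaches the file-based memory connection: the conversation history is kept per session (with `maxHistory` limiting how many past messages are retained), so a request that reuses the same `userID` continues the same conversation. For example, a hypothetical follow-up request like the one below would be answered with the earlier exchange in context:

```json
{ "userID": "abc123", "query": "What was my previous question?" }
```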
## Step 4 - Run the Integration API
After developing the integration using the Micro Integrator extension for Visual Studio Code, deploy it to the Micro Integrator server runtime.
Click the Build and Run icon located in the top right corner of VS Code.
## Step 5 - Test the Integration API

Next, test the Integration API using the built-in Try it functionality in the MI for VS Code extension.

When you run the integration as in Step 4, the Runtime Services interface opens up, listing all the available services.
1. Select Try it of the `ChatAPI` that you have developed and test the resource.
2. Expand the `POST /` resource and click the Try it out button.
3. Enter the request payload in the Request Body field. For this tutorial, you can use the following payload:

    ```json
    { "userID": "001", "query": "Hi!" }
    ```

4. Click Execute to send the request to the API.
Here is the response returned by the Chat API:

```json
{
    "content": "Hello! How can I assist you today?",
    "tokenUsage": {
        "inputTokensDetails": {
            "cachedTokens": 0
        },
        "outputTokensDetails": {
            "reasoningTokens": 0
        },
        "inputTokenCount": 26,
        "outputTokenCount": 9,
        "totalTokenCount": 35
    },
    "finishReason": "STOP",
    "toolExecutions": []
}
```

Notice the `content` section of the API response. This field contains the message generated by the AI based on the user's query.
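You can also invoke the deployed API from outside VS Code. The following is a minimal sketch in Python (using the third-party `requests` library), assuming the Micro Integrator is running locally on its default HTTP port (8290); adjust the host and port to match your setup:

```python
# Minimal sketch: call the Chat API deployed on a local Micro Integrator.
# Assumes the default HTTP port 8290; change the URL if your setup differs.
import requests

response = requests.post(
    "http://localhost:8290/chatapi",
    json={"userID": "001", "query": "Hi!"},
    timeout=60,  # LLM-backed calls can take several seconds
)
response.raise_for_status()

body = response.json()
print(body["content"])                        # the AI-generated reply
print(body["tokenUsage"]["totalTokenCount"])  # total tokens consumed
```

Because the resource uses the `/` URI template, the request goes directly to the API context `/chatapi`.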
Congratulations! Now, you have created your first AI Integration API.
## What's Next?
So far, you have used the LLM (OpenAI API in this case) to process client requests and generate responses. In the next tutorial, you will learn how to build a knowledge base. This will serve as a foundation for enhancing your chatbot's capabilities in future integrations.
Click on the Next button below to continue to the next tutorial.