In order to understand the Model Context Protocol better, I decided to build my own MCP client using TypeScript and Node.js. The official documentation provides one already, but it uses Anthropic's Claude as the LLM. Since I am using OpenAI instead, I needed to modify the code to replace Claude with ChatGPT.
In this blog post, I document the steps I took to achieve that goal.
If you're looking for the code, you can find it here in the repository.
Demo
Dependencies
The first step was to replace Anthropic's SDK with OpenAI's:
npm install openai
npm remove @anthropic-ai/sdk
API Key
Inside the .env file, we replace the existing API key:
-ANTHROPIC_API_KEY=<anthropic_api_key_value>
+OPENAI_API_KEY=<openai_api_key_value>
Setting up
The first lines of code we replace inside index.ts are the respective imports:
-import { Anthropic } from "@anthropic-ai/sdk";
+import { OpenAI } from "openai";
-import {
- MessageParam,
- Tool,
-} from "@anthropic-ai/sdk/resources/messages/messages.mjs";
+import { EasyInputMessage, FunctionTool } from "openai/resources/responses/responses";
Check if we have the OpenAI API key defined in the .env file:
-const ANTHROPIC_API_KEY = process.env.ANTHROPIC_API_KEY;
-if (!ANTHROPIC_API_KEY) {
- throw new Error("ANTHROPIC_API_KEY is not set");
-}
+const OPENAI_API_KEY = process.env.OPENAI_API_KEY;
+if (!OPENAI_API_KEY) {
+ throw new Error("OPENAI_API_KEY is not set");
+}
Then, inside the MCPClient class, replace the following:
-private anthropic: Anthropic;
+private openai: OpenAI;
-private tools: Tool[] = [];
+private tools: FunctionTool[] = [];
constructor() {
- this.anthropic = new Anthropic({ apiKey: ANTHROPIC_API_KEY });
+ this.openai = new OpenAI({ apiKey: OPENAI_API_KEY });
this.mcp = new Client({ name: "mcp-client-cli", version: "1.0.0" });
}
Tools
The MCPClient class contains an important function named connectToServer, which, as the name implies, is responsible for connecting to the MCP server. When the connection is successful, the client will print the available tools and store them for later use.
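For context, the connection step itself stays as in the original example. A simplified sketch of it, assuming a stdio transport and a Node-based server script (the serverScriptPath parameter is only illustrative), looks like this:

// At the top of index.ts
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

async connectToServer(serverScriptPath: string) {
  // Launch the MCP server as a child process and communicate with it over stdio
  const transport = new StdioClientTransport({
    command: process.execPath, // run the server script with Node
    args: [serverScriptPath],
  });
  await this.mcp.connect(transport);
  // ...listing and mapping the tools (shown below) happens right after this
}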
Here, we encounter the first meaningful difference between Claude and ChatGPT—what one calls tool use, the other calls function calling. To accommodate that difference, we need to update our code:
const toolsResult = await this.mcp.listTools();
this.tools = toolsResult.tools.map((tool) => {
return {
+ type: "function",
name: tool.name,
description: tool.description,
- input_schema: tool.inputSchema,
+ parameters: tool.inputSchema,
+ strict: false
};
});
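To make the new shape concrete, one mapped entry could look like this for the add tool used in the examples below; the description and parameter schema are illustrative, since they come from whatever the MCP server declares:

// Illustrative entry in this.tools after the mapping
{
  type: "function",
  name: "add",
  description: "Add two numbers",
  parameters: {
    type: "object",
    properties: {
      a: { type: "number" },
      b: { type: "number" },
    },
    required: ["a", "b"],
  },
  strict: false,
}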
Query
Up to this point, we’ve only discussed setting up the client and the server. Now, we’ll begin communicating with the LLM:
-const messages: MessageParam[] = [
+const input: EasyInputMessage[] = [
{
role: "user",
content: query,
}
];
-const response = await this.anthropic.messages.create({
+const response = await this.openai.responses.create({
- messages,
+ input,
- model: "claude-3-5-sonnet-20241022",
+ model: "gpt-4o",
- max_tokens: 1000,
tools: this.tools
});
Here, you see the first communication with the LLM. We've taken the user’s query and included information about the available tools. Without the tools property, the LLM would reply as best it could. However, by including this property, we inform the model that it can rely on external tools if needed.
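When the model decides it needs a tool, the corresponding entry in response.output is a function call item instead of a plain message. For the add tool used later, it would look roughly like this (the call_id value is made up):

// Illustrative function_call item in response.output
{
  type: "function_call",
  call_id: "call_abc123", // identifier for this tool call
  name: "add", // which tool the model wants to use
  arguments: "{\"a\":45623,\"b\":24554}" // JSON-encoded arguments, as a string
}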
Now, let’s look at the rest of the code. I’ll avoid diff formatting here for clarity and break down the functionality.
// Process the response and handle tool calls
const finalText: string[] = [];
for (const output of response.output) {
  if (output.type === "message") {
    // Regular text reply; output_text aggregates the text of the response
    finalText.push(response.output_text);
  } else if (output.type === "function_call") {
    // The model wants a tool: execute the call through the MCP server
    const toolName = output.name;
    const toolArgs = JSON.parse(output.arguments);
    const result = await this.mcp.callTool({
      name: toolName,
      arguments: toolArgs,
    });
    finalText.push(`[Calling tool ${toolName} with args ${JSON.stringify(toolArgs)}]`);
    // Continue the conversation by feeding the tool result back as a user message
    input.push({
      role: "user",
      content: result.content.map(({ text }) => text).join(""),
    });
    // Ask OpenAI for a follow-up answer that incorporates the tool result
    const followUp = await this.openai.responses.create({
      input,
      model: "gpt-4o",
    });
    finalText.push(followUp.output_text);
  }
}
return finalText.join("\n");
Here's what's happening: the response from OpenAI contains an array of output items. Typically, we expect only one, since our input was a single message. However, two types of output are possible:
- type: "message" → a regular LLM reply
- type: "function_call" → ask an external tool for information
If the response is of type "message", we just collect the final result. If it's a "function_call", we perform an MCP server call using the tool name and its arguments, then feed the result back into a second OpenAI call.
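To make that loop concrete: for the math example below, the input array just before the second responses.create call holds both the original question and the tool result, roughly like this:

// What `input` holds just before the follow-up call (illustrative)
[
  { role: "user", content: "How much is 45623 plus 24554?" }, // original query
  { role: "user", content: "The sum of 45623 and 24554 is 70177." }, // tool result
]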
Let’s consider some example queries:
🟢 What is the capital of France?
This falls under the "message" case, and the reply would be:
Paris
🟡 How much is 45623 plus 24554?
This time, the LLM recognizes its limitation and uses the “add” tool with the two numbers. The external call returns "The sum of 45623 and 24554 is 70177." We feed that information into a second OpenAI prompt, and it replies with:
70177
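Given how finalText is assembled above, the full printed output for that query would look something like this (the exact argument names depend on how the server defines the add tool):

[Calling tool add with args {"a":45623,"b":24554}]
70177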
Wrap-Up
After performing these changes, I successfully built the CLI client example from the documentation using OpenAI’s SDK instead of Anthropic’s. This exercise answered several questions I had about how MCP clients and servers interact.
The final flow looks like this:
- The user sends a query to the MCP client.
- The MCP client sends the query to the LLM, along with the available tools.
- The LLM replies with either a final response or a function call that names a tool and provides JSON arguments for it.
- The MCP client calls the tool from the MCP server using its name and arguments.
- The MCP client sends a second request to the LLM, this time including the result from the tool call.
🎉 Done! We’ve now successfully repurposed the MCP client to work with ChatGPT.