
Server Introduction

Welcome to McpSynergy Server, a backend service that implements the Model Context Protocol (MCP) and acts as an intelligent bridge between a Large Language Model (LLM), such as OpenAI's GPT series, and your frontend application.

Its core mission is to transform users' natural language requests into rendering instructions for rich, interactive UI components on the frontend.


Core Value

Empowering the AI: Allows the AI to move beyond purely text-based communication. By providing the AI with a set of "tools" (i.e., frontend UI components), the server enables the AI to decide when a richer interface is needed to display information or interact with the user.

Business Logic Hub: The server is where the business logic behind the tools is executed. When the AI decides to call a tool (e.g., to look up user information), the server is responsible for performing the actual database queries, API calls, etc., and processing the results into the format required by the frontend component.

Dynamic and Flexible: You can define new tools or modify the behavior of existing ones on the backend without altering the client-side code. This allows your AI application to iterate quickly and adapt to new business requirements.

Protocol Standard: Built on the standard Model Context Protocol (MCP), the server interoperates with the various models and clients that adhere to the protocol.
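
To make the "Business Logic Hub" point concrete, here is a minimal sketch of what a tool handler might look like. The handler name, the `findUserById` lookup, and the props shape are illustrative assumptions, not part of McpSynergy's actual API.

```ts
// Hypothetical tool handler: the server-side logic behind a "show_user_card" tool.
// All names and shapes here are illustrative assumptions, not McpSynergy's real API.

interface UserCardProps {
  name: string;
  email: string;
  avatarUrl?: string;
}

// Stand-in for a real database query or downstream API call.
async function findUserById(userId: string): Promise<UserCardProps | null> {
  // e.g. SELECT name, email, avatar_url FROM users WHERE id = ?
  return { name: "Ada Lovelace", email: "ada@example.com" };
}

// The handler receives the parameters the LLM supplied and returns
// props shaped exactly as the frontend component expects them.
export async function handleShowUserCard(params: { userId: string }): Promise<UserCardProps> {
  const user = await findUserById(params.userId);
  if (!user) {
    throw new Error(`User ${params.userId} not found`);
  }
  return user;
}
```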

Workflow Overview

  1. Load Tool Definitions: On startup, the server loads a schema file that describes all available frontend components (the "tools"). This schema is typically generated by the client project (see the schema sketch after this list).
  2. Construct Smart Prompt: When a user's chat message is received, the server constructs a system prompt containing the user's message and the list of available tools, then sends it to the LLM (sketched below).
  3. Parse AI Intent: The LLM returns its response. If it decides a tool is needed, the response is a structured JSON object specifying the tool's name and the required parameters.
  4. Execute Tool Logic: The server parses this JSON, finds the corresponding tool handler, and executes the relevant business logic (e.g., fetching data from a database).
  5. Build Component Props: The result of the tool's logic is formatted into a JSON object that matches the props requirements of the frontend component.
  6. Send Render Command: Finally, the server sends the tool name and the props data back to the client, instructing it to render the specified UI component (see the dispatch sketch below).
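
Step 1 loads a schema of available tools generated by the client project. The exact format is defined by that project; the sketch below only assumes a simple shape with a tool name, a description for the LLM, and a JSON Schema for the component's props.

```ts
// Assumed shape of one entry in the tool-definitions schema loaded at startup.
// The real format is produced by the client project; this is only an illustration.

interface ToolDefinition {
  name: string;        // e.g. "show_user_card"
  description: string; // natural-language hint the LLM uses to decide when to call it
  propsSchema: object; // JSON Schema describing the props the frontend component expects
}

const exampleTool: ToolDefinition = {
  name: "show_user_card",
  description: "Render a profile card when the user asks about a person.",
  propsSchema: {
    type: "object",
    properties: {
      name: { type: "string" },
      email: { type: "string" },
      avatarUrl: { type: "string" },
    },
    required: ["name", "email"],
  },
};
```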
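Step 2 builds a system prompt that advertises the available tools to the LLM. How McpSynergy formats this prompt is not specified here; the sketch below shows one plausible approach that serializes each tool's name, description, and props schema into the prompt text.

```ts
// One plausible way to build the system prompt from the loaded tool definitions.
// The exact wording and format used by McpSynergy may differ.

interface ToolDefinition {
  name: string;
  description: string;
  propsSchema: object;
}

function buildSystemPrompt(tools: ToolDefinition[]): string {
  const toolList = tools
    .map(
      (t) =>
        `- ${t.name}: ${t.description}\n  parameters: ${JSON.stringify(t.propsSchema)}`
    )
    .join("\n");

  return [
    "You are an assistant that can render UI components by calling tools.",
    "When a tool is appropriate, reply with a JSON object of the form",
    '{"tool": "<name>", "params": { ... }} and nothing else.',
    "Available tools:",
    toolList,
  ].join("\n");
}
```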
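Steps 3 through 6 parse the LLM's tool-call JSON, run the matching handler, and send the tool name plus props back to the client. The registry, handler signature, and response envelope below are assumptions made for illustration, not McpSynergy's actual interfaces.

```ts
// Illustrative dispatch for steps 3-6: parse the LLM reply, execute the matching
// tool handler, and build the render command for the client.
// Names and shapes are assumptions, not McpSynergy's actual interfaces.

type ToolHandler = (params: Record<string, unknown>) => Promise<unknown>;

// Registry mapping tool names to their server-side business logic.
const toolHandlers: Record<string, ToolHandler> = {
  show_user_card: async (params) => {
    // ...query the database, call downstream APIs, etc.
    return { name: "Ada Lovelace", email: "ada@example.com" };
  },
};

interface RenderCommand {
  tool: string;   // which frontend component to render
  props: unknown; // props matching that component's schema
}

export async function handleLlmReply(
  llmReply: string
): Promise<RenderCommand | { text: string }> {
  let call: { tool?: string; params?: Record<string, unknown> };
  try {
    call = JSON.parse(llmReply); // step 3: parse the AI's intent
  } catch {
    return { text: llmReply };   // plain text answer, no tool needed
  }

  const handler = call.tool ? toolHandlers[call.tool] : undefined;
  if (!handler) {
    return { text: llmReply };
  }

  const props = await handler(call.params ?? {}); // steps 4-5: run logic, build props
  return { tool: call.tool!, props };             // step 6: render command for the client
}
```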
