
Validation Error with create_entities: Missing content Field in Memory Server Response #141

Closed
fatsandfreedom opened this issue Nov 30, 2024 · 3 comments
Labels
bug Something isn't working

Comments

@fatsandfreedom

Description

Hello, I am encountering a validation error when calling the create_entities tool on the MCP Memory Server from a Python client. Although the Memory Server appears to return a valid response, the client raises a validation error stating that the required content field is missing.

Error Output

Here’s the error output from my debug script:

Knowledge Graph MCP Server running on stdio
Error calling Memory Server: 1 validation error for CallToolResult
content
  Field required [type=missing, input_value={'toolResult': [{'name': ... ['Test observation']}]}, input_type=dict]
    For further information visit https://errors.pydantic.dev/2.10/v/missing

Steps to Reproduce

1. Run the following Python debug script to test the create_entities tool:
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def test_memory_server():
    server_params = StdioServerParameters(
        command="npx",
        args=[
            "-y",
            "@modelcontextprotocol/server-memory"
        ]
    )

    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Example call to create_entities
            command = "create_entities"
            parameters = {
                "entities": [
                    {
                        "name": "Test_Entity",
                        "entityType": "person",
                        "observations": ["Test observation"]
                    }
                ]
            }

            try:
                result = await session.call_tool(command, parameters)
                print(f"Memory Server result for '{command}': {result}")
            except Exception as e:
                print(f"Error calling Memory Server: {e}")

asyncio.run(test_memory_server())
2. Observe the error output described above.

What I Expected

I expected the create_entities tool to return a successful response with the following structure:

{
  "content": {
    "toolResult": [
      {
        "name": "Test_Entity",
        "entityType": "person",
        "observations": ["Test observation"]
      }
    ]
  }
}

What I Observed

The actual response from the Memory Server was missing the content field. Instead, it returned:

{
  "toolResult": [
    {
      "name": "Test_Entity",
      "entityType": "person",
      "observations": ["Test observation"]
    }
  ]
}

This mismatch causes the client to throw a validation error.
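
For illustration, here is a minimal sketch of why the client rejects that shape. The CallToolResult stand-in below is a simplified, hypothetical model (the real model in the Python SDK has richer content types), but it reproduces the same pydantic "Field required" error when content is absent:

from pydantic import BaseModel, ValidationError

# Simplified stand-in for the client's CallToolResult model (hypothetical).
class CallToolResult(BaseModel):
    content: list  # required field that the server response never provides

raw_response = {
    "toolResult": [
        {
            "name": "Test_Entity",
            "entityType": "person",
            "observations": ["Test observation"],
        }
    ]
}

try:
    CallToolResult(**raw_response)
except ValidationError as e:
    # Prints "1 validation error for CallToolResult ... content Field required"
    print(e)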

Environment

Python Version: 3.10
Operating System: Ubuntu 20.04

@fatsandfreedom added the bug label Nov 30, 2024
@ArsenicBismuth

I also have this issue with the memory & puppeteer servers, but not with the Brave server. So it seems to be a server implementation issue?

When a call succeeds, my personal AI is capable of calling the tool & reacting to the tool response. But for both the memory & puppeteer servers, it stops right after invoking the tool.

@fatsandfreedom
Author

I also have this issue with the memory & puppeteer servers, but not with the Brave server. So it seems to be a server implementation issue?

When a call succeeds, my personal AI is capable of calling the tool & reacting to the tool response. But for both the memory & puppeteer servers, it stops right after invoking the tool.

For the memory server, I used the following workaround to address this issue:

from pydantic import ValidationError

async def safe_call_tool(session, command, parameters):
    """Safely call an MCP tool and handle validation errors."""
    try:
        result = await session.call_tool(command, parameters)
        return result
    except ValidationError as e:
        # Extract the actual tool result from the validation error
        raw_data = e.errors() if hasattr(e, 'errors') else None
        if raw_data and len(raw_data) > 0:
            tool_result = raw_data[0]['input']['toolResult']
            return {"toolResult": tool_result}
        # Re-raise the exception if the result cannot be extracted
        raise
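
For reference, a hypothetical usage of this helper inside the repro script above (the safe_call_tool name and the {"toolResult": ...} fallback shape are my own, not part of the SDK):

# Hypothetical usage, replacing the direct session.call_tool(...) call in the repro script:
result = await safe_call_tool(session, "create_entities", parameters)
print(f"Memory Server result: {result}")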

@jspahrsummers
Member

jspahrsummers commented Dec 3, 2024

Thanks for flagging! I believe #176 fixes this issue, but please let us know if not.
