Handling Large MCP Resources with Jupyter Integration
In most cases, the textual information returned by an MCP server can be passed directly to the LLM for processing. However, when that text is extremely large (for example, a full year's worth of leads), the LLM cannot process the entire dataset effectively.
To solve this problem, Lucien has integrated Jupyter with MCP resources, allowing heavy data analysis to be offloaded to Python code instead of overloading the LLM.
Workflow

- Set a Working Directory
  - A Jupyter environment can be started with a specified working directory.
  - Enable code execution so the environment can handle the data analysis (see the sketch below).
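How Lucien starts its Jupyter environment is internal to the project, but the idea behind this step can be sketched with the standard jupyter_client package: start a kernel in a chosen working directory and open a client that can execute code in it. The kernel name, the directory path, and the use of jupyter_client itself are illustrative assumptions, not Lucien's actual implementation.

```python
# A minimal sketch, assuming jupyter_client is installed; Lucien's own startup code may differ.
from jupyter_client.manager import KernelManager

km = KernelManager(kernel_name="python3")
km.start_kernel(cwd="/data/leads")   # hypothetical working directory for the analysis files

kc = km.client()
kc.start_channels()
kc.wait_for_ready(timeout=60)

# Code execution is now enabled: anything sent here runs inside that directory.
kc.execute("import os; print(os.getcwd())")
```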
- MCP Tool Call with Embedded Resources
  - When an MCP tool call returns data, it can include both text and an embedded resource.
  - The text contains instructions describing how the resource should be analyzed within Jupyter using Python, as in the tool below.
```python
import hashlib
from base64 import b64encode

import httpx
from mcp.server.fastmcp import FastMCP
from mcp.types import BlobResourceContents, EmbeddedResource, TextContent
from pydantic import AnyUrl

mcp = FastMCP("leads")  # the server instance assumed by the @mcp.tool() decorator


@mcp.tool()
async def get_leads(year: int):
    # Download the raw CSV of leads.
    url = "https://drive.google.com/uc?id=1oRXTuvdsZ0HKcpVzV7I5I9-BYKheR1RM&export=download"
    r = await httpx.AsyncClient(follow_redirects=True, timeout=None).get(url)
    order_data = r.content

    # Hash the payload so the same data always maps to the same resource URI.
    id = hashlib.md5(order_data).hexdigest()

    return [
        TextContent(
            type="text",
            text="The order details are below in the csv. You should analyze it in a Jupyter notebook.",
        ),
        EmbeddedResource(
            type="resource",
            resource=BlobResourceContents(
                mimeType="text/csv",
                uri=AnyUrl(f"resource://{id}"),
                blob=b64encode(order_data).decode("utf-8"),
            ),
        ),
    ]
```
- Automatic Resource Handling in Jupyter
  - Ask “Help me analyze the leads in 2025. And try to find something interesting.” Lucien will call the MCP tool and get its resource.
  - After Lucien receives the embedded resource, it is automatically retrieved within Jupyter.
  - Python code is then executed to analyze the resource, providing structured insights instead of raw, unmanageable text (a sketch of what that generated code might look like follows this list).
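The exact code Lucien generates depends on the question and the data, but the core of the retrieval step is straightforward: decode the base64 blob carried by the embedded resource and load it into a DataFrame. The sketch below uses an inline stand-in for the blob and invented column names, since the real CSV schema isn't shown here.

```python
# A minimal sketch of decoding an embedded CSV resource inside the notebook.
# The blob variable stands in for BlobResourceContents.blob; column names are hypothetical.
from base64 import b64decode, b64encode
from io import BytesIO

import pandas as pd

blob = b64encode(b"lead_id,month,amount\n1,2025-01,120\n2,2025-01,80\n3,2025-02,200\n").decode("utf-8")

leads = pd.read_csv(BytesIO(b64decode(blob)))

# Example analysis: total amount per month, the kind of structured summary
# reported back to the user instead of the raw CSV text.
print(leads.groupby("month")["amount"].sum())
```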

