
Set up and interact with your unstructured data processing workflows in [Unstructured Platform](https://unstructured.io)
# Unstructured API MCP Server
An MCP server implementation for interacting with the Unstructured API. This server provides tools to manage source and destination connectors, workflows, and jobs on the Unstructured Platform.
## Available Tools

| Tool | Description |
|---|---|
| `list_sources` | Lists available sources from the Unstructured API. |
| `get_source_info` | Get detailed information about a specific source connector. |
| `create_source_connector` | Create a source connector. |
| `update_source_connector` | Update an existing source connector by params. |
| `delete_source_connector` | Delete a source connector by source id. |
| `list_destinations` | Lists available destinations from the Unstructured API. |
| `get_destination_info` | Get detailed information about a specific destination connector. |
| `create_destination_connector` | Create a destination connector by params. |
| `update_destination_connector` | Update an existing destination connector by destination id. |
| `delete_destination_connector` | Delete a destination connector by destination id. |
| `list_workflows` | Lists workflows from the Unstructured API. |
| `get_workflow_info` | Get detailed information about a specific workflow. |
| `create_workflow` | Create a new workflow with source, destination id, etc. |
| `run_workflow` | Run a specific workflow with workflow id. |
| `update_workflow` | Update an existing workflow by params. |
| `delete_workflow` | Delete a specific workflow by id. |
| `list_jobs` | Lists jobs for a specific workflow from the Unstructured API. |
| `get_job_info` | Get detailed information about a specific job by job id. |
| `cancel_job` | Cancel a specific job by id. |
Below is the list of connectors the `UNS-MCP` server currently supports; please see the full list of source connectors that the Unstructured Platform supports here and the destination list here. We are planning on adding more!

| Source | Destination |
|---|---|
| S3 | S3 |
| Azure | Weaviate |
| Google Drive | Pinecone |
| OneDrive | AstraDB |
| Salesforce | MongoDB |
| Sharepoint | Neo4j |
| | Databricks Volumes |
| | Databricks Volumes Delta Table |
To use a tool that creates, updates, or deletes a connector, the credentials for that specific connector must be defined in your `.env` file. Below is the list of credentials for the connectors we support:

| Credential Name | Description |
|---|---|
| `ANTHROPIC_API_KEY` | required to run the minimal client to interact with our server. |
| `AWS_KEY`, `AWS_SECRET` | required to create an S3 connector via the server, see how in documentation and here |
| `WEAVIATE_CLOUD_API_KEY` | required to create a Weaviate vector db connector, see how in documentation |
| `FIRECRAWL_API_KEY` | required to use the Firecrawl tools; sign up on Firecrawl and get an API key. |
| `ASTRA_DB_APPLICATION_TOKEN`, `ASTRA_DB_API_ENDPOINT` | required to create an AstraDB connector via the server, see how in documentation |
| `AZURE_CONNECTION_STRING` | required option 1 to create an Azure connector via the server, see how in documentation |
| `AZURE_ACCOUNT_NAME` + `AZURE_ACCOUNT_KEY` | required option 2 to create an Azure connector via the server, see how in documentation |
| `AZURE_ACCOUNT_NAME` + `AZURE_SAS_TOKEN` | required option 3 to create an Azure connector via the server, see how in documentation |
| `NEO4J_PASSWORD` | required to create a Neo4j connector via the server, see how in documentation |
| `MONGO_DB_CONNECTION_STRING` | required to create a MongoDB connector via the server, see how in documentation |
| `GOOGLEDRIVE_SERVICE_ACCOUNT_KEY` | a string value. The original service account key (follow documentation) is in a json file; run the documented command in terminal to get the string value |
| `DATABRICKS_CLIENT_ID`, `DATABRICKS_CLIENT_SECRET` | required to create a Databricks volume/delta table connector via the server, see how in documentation and here |
| `ONEDRIVE_CLIENT_ID`, `ONEDRIVE_USER_PNAME`, `ONEDRIVE_TENANT_ID` | required to create a OneDrive connector via the server, see how in documentation |
| `PINECONE_API_KEY` | required to create a Pinecone vector DB connector via the server, see how in documentation |
| `SALESFORCE_CONSUMER_KEY`, `SALESFORCE_PRIVATE_KEY` | required to create a Salesforce source connector via the server, see how in documentation |
| `SHAREPOINT_CLIENT_ID`, `SHAREPOINT_USER_PNAME`, `SHAREPOINT_TENANT_ID` | required to create a SharePoint connector via the server, see how in documentation |
| `LOG_LEVEL` | Used to set the logging level for the minimal client, e.g. set to ERROR to show only errors |
| `CONFIRM_TOOL_USE` | set to true so that the client confirms execution before each tool call |
| `DEBUG_API_REQUESTS` | set to true to output request parameters for better debugging |
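As an example, a minimal `.env` for an S3-to-Pinecone setup might look like the sketch below. All values are placeholders; use the credential names from the table above that match your connectors:

```shell
# Placeholder values only — substitute your own credentials.
UNSTRUCTURED_API_KEY="your-unstructured-key"
ANTHROPIC_API_KEY="your-anthropic-key"   # needed by the minimal client
AWS_KEY="your-aws-access-key"
AWS_SECRET="your-aws-secret-key"
PINECONE_API_KEY="your-pinecone-key"
```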
## Firecrawl Source
Firecrawl is a web crawling API that provides two main capabilities in our MCP:
- HTML Content Retrieval: Using `invoke_firecrawl_crawlhtml` to start crawl jobs and `check_crawlhtml_status` to monitor them
- LLM-Optimized Text Generation: Using `invoke_firecrawl_llmtxt` to generate text and `check_llmtxt_status` to retrieve results
How Firecrawl works:

Web Crawling Process:
- Starts with a specified URL and analyzes it to identify links
- Uses the sitemap if available; otherwise follows links found on the website
- Recursively traverses each link to discover all subpages
- Gathers content from every visited page, handling JavaScript rendering and rate limits
- Jobs can be cancelled with `cancel_crawlhtml_job` if needed
- Use this if you need all the information extracted as raw HTML; Unstructured's workflow cleans it up really well :smile:
LLM Text Generation:
- After crawling, extracts clean, meaningful text content from the crawled pages
- Generates optimized text formats specifically formatted for large language models
- Results are automatically uploaded to the specified S3 location
- Note: LLM text generation jobs cannot be cancelled once started. The `cancel_llmtxt_job` function is provided for consistency but is not currently supported by the Firecrawl API.
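The calling pattern behind these tools is start-then-poll. The sketch below illustrates it with a stubbed client; `FakeMcpClient` and its response shapes are invented for illustration and are not the real MCP client API:

```python
import time

class FakeMcpClient:
    """Stub standing in for a real MCP client; responses are invented."""
    def __init__(self):
        self._polls = 0

    def call_tool(self, name, args):
        if name == "invoke_firecrawl_crawlhtml":
            return {"id": "job-123", "status": "started"}
        if name == "check_crawlhtml_status":
            self._polls += 1
            return {"id": args["crawl_id"],
                    "status": "completed" if self._polls >= 2 else "scraping"}
        raise ValueError(f"unknown tool: {name}")

def crawl_and_wait(client, url, poll_seconds=0.0):
    """Start a crawl job, then poll its status until it finishes."""
    job = client.call_tool("invoke_firecrawl_crawlhtml", {"url": url})
    while True:
        result = client.call_tool("check_crawlhtml_status",
                                  {"crawl_id": job["id"]})
        if result["status"] in ("completed", "failed"):
            return result
        time.sleep(poll_seconds)

final = crawl_and_wait(FakeMcpClient(), "https://example.com")
print(final["status"])  # completed
```

The same loop applies to the llmtxt pair, with `invoke_firecrawl_llmtxt` and `check_llmtxt_status` in place of the crawl tools.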
Note: A `FIRECRAWL_API_KEY` environment variable must be set to use these functions.

## Installation & Configuration
This guide provides step-by-step instructions to set up and configure the UNS_MCP server using Python 3.12 and the `uv` tool.

### Prerequisites
- Python 3.12+
- `uv` for environment management
- An API key from Unstructured. You can sign up and obtain your API key here.
### Using `uv` (Recommended)

No additional installation is required when using `uvx`, as it handles execution. However, if you prefer to install the package directly:

```shell
uv pip install uns_mcp
```
### Configure Claude Desktop

For integration with Claude Desktop, add the following content to your `claude_desktop_config.json`:

Note: The file is located in the `~/Library/Application Support/Claude/` directory.

Using the `uvx` command:

```json
{
  "mcpServers": {
    "UNS_MCP": {
      "command": "uvx",
      "args": ["uns_mcp"],
      "env": {
        "UNSTRUCTURED_API_KEY": "<your-key>"
      }
    }
  }
}
```
Alternatively, using the Python package:

```json
{
  "mcpServers": {
    "UNS_MCP": {
      "command": "python",
      "args": ["-m", "uns_mcp"],
      "env": {
        "UNSTRUCTURED_API_KEY": "<your-key>"
      }
    }
  }
}
```
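Either variant must be valid JSON, or Claude Desktop will silently fail to load the server. A quick self-check sketch (runnable anywhere Python is available; the snippet mirrors the `uvx` example above):

```python
import json

# Paste your config here to verify it parses before restarting Claude Desktop.
config_text = '''
{
  "mcpServers": {
    "UNS_MCP": {
      "command": "uvx",
      "args": ["uns_mcp"],
      "env": {"UNSTRUCTURED_API_KEY": "<your-key>"}
    }
  }
}
'''
config = json.loads(config_text)  # raises ValueError on malformed JSON
server = config["mcpServers"]["UNS_MCP"]
print(server["command"])  # uvx
```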
### Using Source Code

1. Clone the repository.

2. Install dependencies:

   ```shell
   uv sync
   ```

3. Set your Unstructured API key as an environment variable. Create a `.env` file in the root directory with the following content:

   ```shell
   UNSTRUCTURED_API_KEY="YOUR_KEY"
   ```

   Refer to `.env.template` for the configurable environment variables.
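A fail-fast check at startup avoids confusing downstream errors from a missing key. A minimal stdlib sketch (the `require_env` helper is illustrative, not part of the package):

```python
import os

def require_env(name: str) -> str:
    """Fail fast with a clear message if a required variable is missing."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"{name} is not set; add it to your .env file")
    return value

# For illustration only — normally the value comes from your .env file.
os.environ.setdefault("UNSTRUCTURED_API_KEY", "demo-key")
key = require_env("UNSTRUCTURED_API_KEY")
print(bool(key))  # True
```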
You can now run the server using one of the following methods:
<details>
<summary>
Using Editable Package Installation
</summary>

Install as an editable package:

```shell
uvx pip install -e .
```

Update your Claude Desktop config:

```json
{
  "mcpServers": {
    "UNS_MCP": {
      "command": "uvx",
      "args": ["uns_mcp"]
    }
  }
}
```

Note: Remember to point to the `uvx` executable in the environment where you installed the package.

</details>
<details>
<summary>
Using SSE Server Protocol
</summary>

Note: Not supported by Claude Desktop.

For the SSE protocol, you can debug more easily by decoupling the client and server:

1. Start the server in one terminal:

   ```shell
   uv run python uns_mcp/server.py --host 127.0.0.1 --port 8080
   # or
   make sse-server
   ```

2. Test the server using a local client in another terminal:

   ```shell
   uv run python minimal_client/client.py "http://127.0.0.1:8080/sse"
   # or
   make sse-client
   ```

Note: To stop the services, use `Ctrl+C` on the client first, then the server.

</details>
<details>
<summary>
Using Stdio Server Protocol
</summary>

Configure Claude Desktop to use stdio:

```json
{
  "mcpServers": {
    "UNS_MCP": {
      "command": "ABSOLUTE/PATH/TO/.local/bin/uv",
      "args": [
        "--directory",
        "ABSOLUTE/PATH/TO/YOUR-UNS-MCP-REPO/uns_mcp",
        "run",
        "server.py"
      ]
    }
  }
}
```

Alternatively, run the local client:

```shell
uv run python minimal_client/client.py uns_mcp/server.py
```

</details>
### Additional Local Client Configuration

Configure the minimal client using environment variables:

- `LOG_LEVEL="ERROR"`: Set to suppress debug output from the LLM, displaying clear messages for users.
- `CONFIRM_TOOL_USE='false'`: Disable tool-use confirmation before execution. Use with caution, especially during development, as the LLM may execute expensive workflows or delete data.
### Debugging tools

Anthropic provides the MCP Inspector tool to debug/test your MCP server. Run the following command to spin up a debugging UI:

```shell
mcp dev uns_mcp/server.py
```

From there, you will be able to add environment variables (pointing to your local env) in the left pane. Include your personal API key there as an env var. Go to tools to test out the capabilities you add to the MCP server.
If you need to log request call parameters to `UnstructuredClient`, set the environment variable `DEBUG_API_REQUESTS=true`. The logs are stored in a file named in the format `unstructured-client-{date}.log`, which can be examined to debug request call parameters to `UnstructuredClient` functions.

### Add terminal access to minimal client
We are going to use @wonderwhy-er/desktop-commander to add terminal access to the minimal client. It is built on the MCP Filesystem Server. Be careful, as the client (i.e. the LLM) now has access to private files.

Execute the following command to install the package:

```shell
npx @wonderwhy-er/desktop-commander setup
```

Then start the client with an extra parameter:

```shell
uv run python minimal_client/client.py "http://127.0.0.1:8080/sse" "@wonderwhy-er/desktop-commander"
# or
make sse-client-terminal
```
### Using a subset of tools

If your client supports using only a subset of tools, be aware that the `update_workflow` tool has to be loaded into the context together with the `create_workflow` tool, because it contains the detailed description of how to create and configure a custom node.
### Known issues

- `update_workflow` needs to have the configuration of the workflow it is updating in context, either provided by the user or by calling the `get_workflow_info` tool, because `update_workflow` does not work as a `patch` applier; it fully replaces the workflow config.
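The practical consequence of full replacement can be shown with plain dicts (the config shape below is hypothetical, only the merge-vs-replace behavior matters):

```python
# update_workflow behaves like full replacement, not a patch/merge.
current = {"name": "wf", "source_id": "s1", "destination_id": "d1"}

# A patch-style update would merge the change into the existing config:
patched = {**current, "destination_id": "d2"}

# Full replacement drops any field you do not resend:
replaced = {"destination_id": "d2"}

print("source_id" in patched)   # True
print("source_id" in replaced)  # False
```

This is why the full current config must be fetched (or supplied by the user) before calling `update_workflow`.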
### CHANGELOG.md

Any newly developed features, fixes, or enhancements will be added to CHANGELOG.md. The 0.x.x-dev pre-release format is preferred before we bump to a stable version.
### Troubleshooting

- If you encounter `Error: spawn <command> ENOENT`, it means `<command>` is not installed or not visible in your PATH:
  - Make sure to install it and add it to your PATH, or
  - Provide the absolute path to the command in the `command` field of your config. For example, replace `python` with `/opt/miniconda3/bin/python`.
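A quick way to find the absolute path to paste into the `command` field is Python's standard library; `shutil.which` performs the same PATH lookup that `spawn` does:

```python
import shutil
import sys

# Absolute path of the interpreter currently running — always available:
interpreter = sys.executable
print(interpreter)

# shutil.which mirrors the PATH lookup that `spawn <command>` performs;
# None means spawning that command would fail with ENOENT.
resolved = shutil.which("uv")
print(resolved)
```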