Firecrawl MCP Server
Extract web data with [Firecrawl](https://firecrawl.dev)
A Model Context Protocol (MCP) server implementation that integrates with Firecrawl for web scraping capabilities.
Big thanks to @vrknetha and @cawstudios for the initial implementation! You can also try our MCP Server on MCP.so's playground or on Klavis AI. Thanks to MCP.so and Klavis AI for hosting, and to @gstarwd and @xiangkaiz for integrating our server.
Features
- Scrape, crawl, search, extract, deep research and batch scrape support
- Web scraping with JS rendering
- URL discovery and crawling
- Web search with content extraction
- Automatic retries with exponential backoff
- Efficient batch processing with built-in rate limiting
- Credit usage monitoring for cloud API
- Comprehensive logging system
- Support for cloud and self-hosted Firecrawl instances
- Mobile/Desktop viewport support
- Smart content filtering with tag inclusion/exclusion
Installation
Running with npx
```bash
env FIRECRAWL_API_KEY=fc-YOUR_API_KEY npx -y firecrawl-mcp
```
Manual Installation
```bash
npm install -g firecrawl-mcp
```
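After installing globally, you can start the server directly. This is a minimal sketch assuming the package exposes a `firecrawl-mcp` binary, as the npx command above implies:
```bash
# Start the globally installed server (uses the default stdio transport)
env FIRECRAWL_API_KEY=fc-YOUR_API_KEY firecrawl-mcp
```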
Running on Cursor
Configuring Cursor 🖥️
Note: Requires Cursor version 0.45.6+
For the most up-to-date configuration instructions, please refer to the official Cursor documentation on configuring MCP servers: the Cursor MCP Server Configuration Guide.
To configure Firecrawl MCP in Cursor v0.45.6:
- Open Cursor Settings
- Go to Features > MCP Servers
- Click "+ Add New MCP Server"
- Enter the following:
- Name: "firecrawl-mcp" (or your preferred name)
- Type: "command"
- Command:
```bash
env FIRECRAWL_API_KEY=your-api-key npx -y firecrawl-mcp
```
To configure Firecrawl MCP in Cursor v0.48.6:
- Open Cursor Settings
- Go to Features > MCP Servers
- Click "+ Add new global MCP server"
- Enter the following code:
```json
{
  "mcpServers": {
    "firecrawl-mcp": {
      "command": "npx",
      "args": ["-y", "firecrawl-mcp"],
      "env": {
        "FIRECRAWL_API_KEY": "YOUR-API-KEY"
      }
    }
  }
}
```
If you are using Windows and are running into issues, try:
```bash
cmd /c "set FIRECRAWL_API_KEY=your-api-key && npx -y firecrawl-mcp"
```
Replace `your-api-key` with your Firecrawl API key. If you don't have one yet, you can create an account and get it from https://www.firecrawl.dev/app/api-keys.
After adding the server, refresh the MCP server list to see the new tools. The Composer Agent will automatically use Firecrawl MCP when appropriate, but you can explicitly request it by describing your web scraping needs. Access the Composer via Command+L (Mac), select "Agent" next to the submit button, and enter your query.
Running on Windsurf
Add this to your `./codeium/windsurf/model_config.json`:
```json
{
  "mcpServers": {
    "mcp-server-firecrawl": {
      "command": "npx",
      "args": ["-y", "firecrawl-mcp"],
      "env": {
        "FIRECRAWL_API_KEY": "YOUR_API_KEY"
      }
    }
  }
}
```
Running with SSE Local Mode
To run the server using Server-Sent Events (SSE) locally instead of the default stdio transport:
```bash
env SSE_LOCAL=true FIRECRAWL_API_KEY=fc-YOUR_API_KEY npx -y firecrawl-mcp
```
Use the URL: http://localhost:3000/sse
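To verify the endpoint is up, you can open a connection with curl (a quick sanity check, assuming the default port 3000 shown above):
```bash
# Should hold the connection open and stream text/event-stream data
curl -N -H "Accept: text/event-stream" http://localhost:3000/sse
```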
Installing via Smithery (Legacy)
To install Firecrawl for Claude Desktop automatically via Smithery:
```bash
npx -y @smithery/cli install @mendableai/mcp-server-firecrawl --client claude
```
Configuration
Environment Variables
Required for Cloud API:
- `FIRECRAWL_API_KEY`: Your Firecrawl API key
  - Required when using the cloud API (default)
  - Optional when using a self-hosted instance with `FIRECRAWL_API_URL`
- `FIRECRAWL_API_URL` (Optional): Custom API endpoint for self-hosted instances
  - Example: `https://firecrawl.your-domain.com`
  - If not provided, the cloud API will be used (requires an API key)
Optional Configuration
Retry Configuration:
- `FIRECRAWL_RETRY_MAX_ATTEMPTS`: Maximum number of retry attempts (default: 3)
- `FIRECRAWL_RETRY_INITIAL_DELAY`: Initial delay in milliseconds before the first retry (default: 1000)
- `FIRECRAWL_RETRY_MAX_DELAY`: Maximum delay in milliseconds between retries (default: 10000)
- `FIRECRAWL_RETRY_BACKOFF_FACTOR`: Exponential backoff multiplier (default: 2)
Credit Usage Monitoring:
- `FIRECRAWL_CREDIT_WARNING_THRESHOLD`: Credit usage warning threshold (default: 1000)
- `FIRECRAWL_CREDIT_CRITICAL_THRESHOLD`: Credit usage critical threshold (default: 100)
Configuration Examples
For cloud API usage with custom retry and credit monitoring:
```bash
# Required for cloud API
export FIRECRAWL_API_KEY=your-api-key

# Optional retry configuration
export FIRECRAWL_RETRY_MAX_ATTEMPTS=5      # Increase max retry attempts
export FIRECRAWL_RETRY_INITIAL_DELAY=2000  # Start with 2s delay
export FIRECRAWL_RETRY_MAX_DELAY=30000     # Maximum 30s delay
export FIRECRAWL_RETRY_BACKOFF_FACTOR=3    # More aggressive backoff

# Optional credit monitoring
export FIRECRAWL_CREDIT_WARNING_THRESHOLD=2000   # Warning at 2000 credits
export FIRECRAWL_CREDIT_CRITICAL_THRESHOLD=500   # Critical at 500 credits
```
For self-hosted instance:
```bash
# Required for self-hosted
export FIRECRAWL_API_URL=https://firecrawl.your-domain.com

# Optional authentication for self-hosted
export FIRECRAWL_API_KEY=your-api-key  # If your instance requires auth

# Custom retry configuration
export FIRECRAWL_RETRY_MAX_ATTEMPTS=10
export FIRECRAWL_RETRY_INITIAL_DELAY=500  # Start with faster retries
```
Usage with Claude Desktop
Add this to your `claude_desktop_config.json`:
```json
{
  "mcpServers": {
    "mcp-server-firecrawl": {
      "command": "npx",
      "args": ["-y", "firecrawl-mcp"],
      "env": {
        "FIRECRAWL_API_KEY": "YOUR_API_KEY_HERE",
        "FIRECRAWL_RETRY_MAX_ATTEMPTS": "5",
        "FIRECRAWL_RETRY_INITIAL_DELAY": "2000",
        "FIRECRAWL_RETRY_MAX_DELAY": "30000",
        "FIRECRAWL_RETRY_BACKOFF_FACTOR": "3",
        "FIRECRAWL_CREDIT_WARNING_THRESHOLD": "2000",
        "FIRECRAWL_CREDIT_CRITICAL_THRESHOLD": "500"
      }
    }
  }
}
```
System Configuration
The server includes several configurable parameters that can be set via environment variables. Here are the default values if not configured:
```javascript
const CONFIG = {
  retry: {
    maxAttempts: 3,     // Number of retry attempts for rate-limited requests
    initialDelay: 1000, // Initial delay before first retry (in milliseconds)
    maxDelay: 10000,    // Maximum delay between retries (in milliseconds)
    backoffFactor: 2,   // Multiplier for exponential backoff
  },
  credit: {
    warningThreshold: 1000, // Warn when credit usage reaches this level
    criticalThreshold: 100, // Critical alert when credit usage reaches this level
  },
};
```
These configurations control:
1. Retry Behavior
   - Automatically retries failed requests due to rate limits
   - Uses exponential backoff to avoid overwhelming the API
   - Example: With default settings, retries will be attempted at the times below (see the sketch after this list):
     - 1st retry: 1 second delay
     - 2nd retry: 2 seconds delay
     - 3rd retry: 4 seconds delay (capped at maxDelay)
2. Credit Usage Monitoring
   - Tracks API credit consumption for cloud API usage
   - Provides warnings at specified thresholds
   - Helps prevent unexpected service interruption
   - Example: With default settings:
     - Warning at 1000 credits remaining
     - Critical alert at 100 credits remaining
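As a minimal sketch of that backoff schedule (plain JavaScript using the default values from CONFIG above; an illustration, not the server's actual source):
```javascript
// Delay before each retry: initialDelay * backoffFactor^(attempt - 1), capped at maxDelay
const { maxAttempts, initialDelay, maxDelay, backoffFactor } = {
  maxAttempts: 3,
  initialDelay: 1000,
  maxDelay: 10000,
  backoffFactor: 2,
};
for (let attempt = 1; attempt <= maxAttempts; attempt++) {
  const delay = Math.min(initialDelay * backoffFactor ** (attempt - 1), maxDelay);
  console.log(`retry ${attempt}: ${delay} ms`); // 1000 ms, 2000 ms, 4000 ms
}
```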
Rate Limiting and Batch Processing
The server utilizes Firecrawl's built-in rate limiting and batch processing capabilities:
- Automatic rate limit handling with exponential backoff
- Efficient parallel processing for batch operations
- Smart request queuing and throttling
- Automatic retries for transient errors
Available Tools
1. Scrape Tool (`firecrawl_scrape`)
Scrape content from a single URL with advanced options.
{ "name": "firecrawl_scrape", "arguments": { "url": "https://example.com", "formats": ["markdown"], "onlyMainContent": true, "waitFor": 1000, "timeout": 30000, "mobile": false, "includeTags": ["article", "main"], "excludeTags": ["nav", "footer"], "skipTlsVerification": false } }
2. Batch Scrape Tool (`firecrawl_batch_scrape`)
Scrape multiple URLs efficiently with built-in rate limiting and parallel processing.
{ "name": "firecrawl_batch_scrape", "arguments": { "urls": ["https://example1.com", "https://example2.com"], "options": { "formats": ["markdown"], "onlyMainContent": true } } }
Response includes operation ID for status checking:
{ "content": [ { "type": "text", "text": "Batch operation queued with ID: batch_1. Use firecrawl_check_batch_status to check progress." } ], "isError": false }
3. Check Batch Status (`firecrawl_check_batch_status`)
Check the status of a batch operation.
{ "name": "firecrawl_check_batch_status", "arguments": { "id": "batch_1" } }
4. Search Tool (`firecrawl_search`)
Search the web and optionally extract content from search results.
{ "name": "firecrawl_search", "arguments": { "query": "your search query", "limit": 5, "lang": "en", "country": "us", "scrapeOptions": { "formats": ["markdown"], "onlyMainContent": true } } }
5. Crawl Tool (`firecrawl_crawl`)
Start an asynchronous crawl with advanced options.
{ "name": "firecrawl_crawl", "arguments": { "url": "https://example.com", "maxDepth": 2, "limit": 100, "allowExternalLinks": false, "deduplicateSimilarURLs": true } }
6. Extract Tool (`firecrawl_extract`)
Extract structured information from web pages using LLM capabilities. Supports both cloud AI and self-hosted LLM extraction.
{ "name": "firecrawl_extract", "arguments": { "urls": ["https://example.com/page1", "https://example.com/page2"], "prompt": "Extract product information including name, price, and description", "systemPrompt": "You are a helpful assistant that extracts product information", "schema": { "type": "object", "properties": { "name": { "type": "string" }, "price": { "type": "number" }, "description": { "type": "string" } }, "required": ["name", "price"] }, "allowExternalLinks": false, "enableWebSearch": false, "includeSubdomains": false } }
Example response:
{ "content": [ { "type": "text", "text": { "name": "Example Product", "price": 99.99, "description": "This is an example product description" } } ], "isError": false }
Extract Tool Options:
- `urls`: Array of URLs to extract information from
- `prompt`: Custom prompt for the LLM extraction
- `systemPrompt`: System prompt to guide the LLM
- `schema`: JSON schema for structured data extraction
- `allowExternalLinks`: Allow extraction from external links
- `enableWebSearch`: Enable web search for additional context
- `includeSubdomains`: Include subdomains in extraction
When using a self-hosted instance, the extraction will use your configured LLM. For cloud API, it uses Firecrawl's managed LLM service.
7. Deep Research Tool (`firecrawl_deep_research`)
Conduct deep web research on a query using intelligent crawling, search, and LLM analysis.
{ "name": "firecrawl_deep_research", "arguments": { "query": "how does carbon capture technology work?", "maxDepth": 3, "timeLimit": 120, "maxUrls": 50 } }
Arguments:
- `query` (string, required): The research question or topic to explore.
- `maxDepth` (number, optional): Maximum recursive depth for crawling/search (default: 3).
- `timeLimit` (number, optional): Time limit in seconds for the research session (default: 120).
- `maxUrls` (number, optional): Maximum number of URLs to analyze (default: 50).
Returns:
- Final analysis generated by an LLM based on the research (`data.finalAnalysis`).
- May also include structured activities and sources used in the research process.
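For illustration only, a successful response might resemble the following; everything beyond `data.finalAnalysis` is an assumed shape and the exact structure may differ:
```json
{
  "data": {
    "finalAnalysis": "Carbon capture technology works by separating CO2 from industrial flue gases...",
    "sources": ["https://example.com/carbon-capture-overview"]
  }
}
```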
8. Generate LLMs.txt Tool (`firecrawl_generate_llmstxt`)
Generate a standardized llms.txt (and optionally llms-full.txt) file for a given domain. This file defines how large language models should interact with the site.
{ "name": "firecrawl_generate_llmstxt", "arguments": { "url": "https://example.com", "maxUrls": 20, "showFullText": true } }
Arguments:
- `url` (string, required): The base URL of the website to analyze.
- `maxUrls` (number, optional): Maximum number of URLs to include (default: 10).
- `showFullText` (boolean, optional): Whether to include llms-full.txt contents in the response.
Returns:
- Generated llms.txt file contents, and optionally the llms-full.txt contents (`data.llmstxt` and/or `data.llmsfulltxt`).
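For reference, llms.txt files generally follow a simple markdown shape: an H1 title, a blockquote summary, and link sections. A hedged illustration of that general shape (not actual tool output):
```
# Example Domain

> A short description of the site for large language models.

## Docs
- [Getting Started](https://example.com/docs/start): Brief note on what the page covers
```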
Logging System
The server includes comprehensive logging:
- Operation status and progress
- Performance metrics
- Credit usage monitoring
- Rate limit tracking
- Error conditions
Example log messages:
```
[INFO] Firecrawl MCP Server initialized successfully
[INFO] Starting scrape for URL: https://example.com
[INFO] Batch operation queued with ID: batch_1
[WARNING] Credit usage has reached warning threshold
[ERROR] Rate limit exceeded, retrying in 2s...
```
Error Handling
The server provides robust error handling:
- Automatic retries for transient errors
- Rate limit handling with backoff
- Detailed error messages
- Credit usage warnings
- Network resilience
Example error response:
{ "content": [ { "type": "text", "text": "Error: Rate limit exceeded. Retrying in 2 seconds..." } ], "isError": true }
Development
```bash
# Install dependencies
npm install

# Build
npm run build

# Run tests
npm test
```
Contributing
- Fork the repository
- Create your feature branch
- Run tests: `npm test`
- Submit a pull request
License
MIT License - see LICENSE file for details