# API Documentation
Use `/api/v1/scrape` for HTML extraction, `/api/v1/screenshot` for images, and `/api/v1/pdf` for PDFs. Authenticate every request with the `X-API-Key` header.
## Quick start
Check service health, then make your first request.
```bash
curl https://api.scraper.dev/health
```

## Scrape
Light mode fetches raw HTML. Heavy mode renders JavaScript (for single-page applications) in a headless browser.
| Param | Type | Required | Description |
|---|---|---|---|
| url | string | yes | Target URL to scrape |
| render | boolean | no | Enable JavaScript rendering (heavy mode). Default: false |
| selector | string | no | CSS selector to extract specific element(s) from page |
| wait_for | string | no | CSS selector to wait for before scraping (requires render=true) |
| timeout | number | no | Request timeout in milliseconds (1000-30000). Default: 30000 |
| cache_bypass | boolean | no | Force a fresh request, bypassing cache. Default: false |
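The constraints in the parameter table can be checked client-side before a request is sent. The sketch below is illustrative only; the helper name and error messages are ours, not part of the API:

```python
def validate_scrape_params(params: dict) -> list[str]:
    """Return a list of problems with a /api/v1/scrape payload (empty if valid).

    Hypothetical client-side helper; mirrors the parameter table above.
    """
    errors = []
    if not params.get("url"):
        errors.append("url is required")
    timeout = params.get("timeout", 30000)  # server default per the table
    if not 1000 <= timeout <= 30000:
        errors.append("timeout must be between 1000 and 30000 ms")
    # wait_for only takes effect in heavy (rendered) mode
    if params.get("wait_for") and not params.get("render"):
        errors.append("wait_for requires render=true")
    return errors

print(validate_scrape_params({"url": "https://example.com", "render": False}))
# []
```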
```bash
curl -X POST https://api.scraper.dev/api/v1/scrape \
  -H "X-API-Key: sk_your_api_key" \
  -H "Content-Type: application/json" \
  -d '{"url":"https://example.com","render":false}'
```

## Screenshot
Capture viewport or full page. Supported formats: png, jpeg, webp.
| Param | Type | Required | Description |
|---|---|---|---|
| url | string | yes | Target URL |
| width | number | no | Viewport width |
| height | number | no | Viewport height |
| full_page | boolean | no | Capture full page |
| format | string | no | Image format: `png`, `jpeg`, or `webp` |
| timeout | number | no | Timeout in ms (1000-30000) |
```bash
curl -X POST https://api.scraper.dev/api/v1/screenshot \
  -H "X-API-Key: sk_your_api_key" \
  -H "Content-Type: application/json" \
  -d '{"url":"https://example.com","full_page":true,"format":"png"}' \
  --output screenshot.png
```

## PDF export
Print a fully rendered page to PDF using browser rendering.
| Param | Type | Required | Description |
|---|---|---|---|
| url | string | yes | Target URL |
| wait_for | string | no | Wait for selector before printing |
| format | string | no | Paper format: `A4`, `Letter`, `Legal`, `A3`, or `A5` |
| landscape | boolean | no | Landscape orientation |
| print_background | boolean | no | Include background graphics |
| scale | number | no | Scale factor (0.1-2.0) |
| timeout | number | no | Timeout in ms (1000-30000) |
```bash
curl -X POST https://api.scraper.dev/api/v1/pdf \
  -H "X-API-Key: sk_your_api_key" \
  -H "Content-Type: application/json" \
  -d '{"url":"https://example.com","format":"A4","print_background":true}' \
  --output page.pdf
```

## Webhooks
Configure a webhook URL in the dashboard to receive callbacks when requests finish. Deliveries are retried automatically and are visible under Settings > Webhooks.
Verify each delivery by checking the `X-ScraperAPI-Signature` header:

```ts
import crypto from "node:crypto";

export function verifyScraperWebhook(options: {
  signature: string;
  secret: string;
  payload: string;
}): boolean {
  // Header format: "t=<unix seconds>,v1=<hex hmac>"
  const parts = Object.fromEntries(
    options.signature.split(",").map((p) => p.trim().split("=")),
  );
  const t = parts.t;
  const v1 = parts.v1;
  if (!t || !v1) return false;
  const expected = crypto
    .createHmac("sha256", options.secret)
    .update(t + "." + options.payload)
    .digest("hex");
  // timingSafeEqual throws on unequal lengths, so check first
  if (v1.length !== expected.length) return false;
  return crypto.timingSafeEqual(Buffer.from(v1), Buffer.from(expected));
}
```
The signature is an HMAC-SHA-256 over `t.payload` (the timestamp in seconds, a dot, then the raw request body), keyed with your webhook secret.
## SDKs
Use the official SDKs to avoid hand-writing HTTP calls and error handling.
### Node.js
```ts
import { ScraperApiClient } from "@scraper-api/sdk-node";

const client = new ScraperApiClient({
  apiKey: process.env.SCRAPER_API_KEY!,
  baseUrl: "https://api.scraper.dev",
});

const res = await client.scrape({ url: "https://example.com", render: false });
console.log(res.data.title, res.meta.request_id);
```

### Python
```python
from scraper_api_sdk import ScraperApiClient

client = ScraperApiClient(api_key="sk_...")
res = client.scrape({"url": "https://example.com", "render": False})
print(res.data["title"], res.meta["request_id"])
```

## Error format
All errors include a `request_id` for debugging. Quota-exceeded responses also include `X-RateLimit-*` headers. All responses include `X-API-Version`.
```json
{
  "success": false,
  "error": {
    "code": "SSRF_BLOCKED",
    "message": "Access to private IP addresses is not allowed",
    "request_id": "req_..."
  }
}
```

Error codes: `INVALID_REQUEST`, `UNAUTHORIZED`, `SSRF_BLOCKED`, `SELECTOR_NOT_FOUND`, `SCRAPE_TIMEOUT`, `QUOTA_EXCEEDED`, `BROWSER_UNAVAILABLE`.
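The error codes can drive client-side retry logic. A minimal sketch; which codes are worth retrying is our assumption, not a documented contract:

```python
# Hypothetical classification: transient failures vs. errors that will
# recur no matter how many times the same request is replayed.
RETRYABLE = {"SCRAPE_TIMEOUT", "BROWSER_UNAVAILABLE"}


def summarize_error(body: dict) -> str:
    """Turn an error response body into a log line that keeps the request id."""
    err = body.get("error", {})
    code = err.get("code", "UNKNOWN")
    verdict = "retryable" if code in RETRYABLE else "permanent"
    return f"{code} ({verdict}): {err.get('message', '')} [{err.get('request_id', '-')}]"


print(summarize_error({
    "success": False,
    "error": {
        "code": "SSRF_BLOCKED",
        "message": "Access to private IP addresses is not allowed",
        "request_id": "req_123",
    },
}))
# SSRF_BLOCKED (permanent): Access to private IP addresses is not allowed [req_123]
```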