API Documentation

Use /api/v1/scrape for HTML extraction, /api/v1/screenshot for images, and /api/v1/pdf for PDFs. Authenticate every request by sending your API key in the X-API-Key header.

Quick start

Check service health, then make your first request.

curl https://api.scraper.dev/health

Scrape

Light mode fetches the raw HTML over HTTP. Heavy mode renders JavaScript-driven pages (SPAs) in a browser before extraction.

Param         Type     Required  Description
url           string   yes       Target URL to scrape
render        boolean  no        Enable JavaScript rendering (heavy mode). Default: false
selector      string   no        CSS selector to extract specific element(s) from the page
wait_for      string   no        CSS selector to wait for before scraping (requires render=true)
timeout       number   no        Request timeout in milliseconds (1000-30000). Default: 30000
cache_bypass  boolean  no        Force a fresh request, bypassing the cache. Default: false

curl -X POST https://api.scraper.dev/api/v1/scrape \
  -H "X-API-Key: sk_your_api_key" \
  -H "Content-Type: application/json" \
  -d '{"url":"https://example.com","render":false}'
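The constraints in the table above (required url, the timeout range, and wait_for depending on render=true) can be checked client-side before a request is sent. A minimal sketch; the helper name is illustrative and not part of any official SDK:

```python
def build_scrape_payload(url, render=False, selector=None, wait_for=None,
                         timeout=30000, cache_bypass=False):
    """Validate scrape parameters per the table above and return a JSON body."""
    if not url:
        raise ValueError("url is required")
    if wait_for is not None and not render:
        raise ValueError("wait_for requires render=true")
    if not 1000 <= timeout <= 30000:
        raise ValueError("timeout must be between 1000 and 30000 ms")
    payload = {"url": url, "render": render, "timeout": timeout,
               "cache_bypass": cache_bypass}
    if selector is not None:
        payload["selector"] = selector
    if wait_for is not None:
        payload["wait_for"] = wait_for
    return payload
```

Failing fast on an invalid combination saves a round trip and a billed request.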

Screenshot

Capture viewport or full page. Supported formats: png, jpeg, webp.

Param      Type               Required  Description
url        string             yes       Target URL
width      number             no        Viewport width
height     number             no        Viewport height
full_page  boolean            no        Capture the full page
format     png | jpeg | webp  no        Image format
timeout    number             no        Timeout in ms (1000-30000)

curl -X POST https://api.scraper.dev/api/v1/screenshot \
  -H "X-API-Key: sk_your_api_key" \
  -H "Content-Type: application/json" \
  -d '{"url":"https://example.com","full_page":true,"format":"png"}' \
  --output screenshot.png

PDF export

Print a fully rendered page to PDF using browser rendering.

Param             Type                           Required  Description
url               string                         yes       Target URL
wait_for          string                         no        Wait for selector before printing
format            A4 | Letter | Legal | A3 | A5  no        Paper format
landscape         boolean                        no        Landscape orientation
print_background  boolean                        no        Include background graphics
scale             number                         no        Scale factor (0.1-2.0)
timeout           number                         no        Timeout in ms (1000-30000)

curl -X POST https://api.scraper.dev/api/v1/pdf \
  -H "X-API-Key: sk_your_api_key" \
  -H "Content-Type: application/json" \
  -d '{"url":"https://example.com","format":"A4","print_background":true}' \
  --output page.pdf
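The paper-format enum and scale range above can also be validated before a request. A small illustrative guard (function and parameter names are not part of any SDK):

```python
# Allowed values for the PDF endpoint's format parameter.
ALLOWED_FORMATS = {"A4", "Letter", "Legal", "A3", "A5"}

def validate_pdf_options(paper_format: str = "A4", scale: float = 1.0) -> None:
    """Reject out-of-range PDF options (format and scale) before making a request."""
    if paper_format not in ALLOWED_FORMATS:
        raise ValueError(f"format must be one of {sorted(ALLOWED_FORMATS)}")
    if not 0.1 <= scale <= 2.0:
        raise ValueError("scale must be between 0.1 and 2.0")
```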

Webhooks

Configure a webhook URL in the dashboard to receive callbacks when requests finish. Deliveries are retried automatically and are visible under Settings → Webhooks.

Deliveries are signed. Verify the X-ScraperAPI-Signature header:

import crypto from "node:crypto";

export function verifyScraperWebhook(options: {
  signature: string;
  secret: string;
  payload: string;
}) {
  const parts = Object.fromEntries(
    options.signature.split(",").map((p) => p.trim().split("=")),
  );
  const t = parts.t;
  const v1 = parts.v1;
  if (!t || !v1) return false;

  const expected = crypto
    .createHmac("sha256", options.secret)
    .update(t + "." + options.payload)
    .digest("hex");

  // timingSafeEqual throws if the buffers differ in length, so check first.
  if (v1.length !== expected.length) return false;
  return crypto.timingSafeEqual(Buffer.from(v1), Buffer.from(expected));
}

The v1 value is an HMAC-SHA-256, keyed with your webhook secret, over t.payload: the timestamp in seconds, a literal dot, and the raw request body.
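For non-Node consumers, the same check can be sketched in Python. The header format (t=<timestamp>,v1=<hex digest>) follows the description above; the function name is illustrative:

```python
import hashlib
import hmac

def verify_scraper_webhook(signature: str, secret: str, payload: str) -> bool:
    """Verify an X-ScraperAPI-Signature header of the form 't=<ts>,v1=<hex>'."""
    parts = dict(p.strip().split("=", 1) for p in signature.split(",") if "=" in p)
    t, v1 = parts.get("t"), parts.get("v1")
    if not t or not v1:
        return False
    # HMAC-SHA-256 over "<timestamp>.<raw body>", keyed with the webhook secret.
    expected = hmac.new(secret.encode(), f"{t}.{payload}".encode(),
                        hashlib.sha256).hexdigest()
    # Constant-time comparison to avoid timing attacks.
    return hmac.compare_digest(v1, expected)
```

Always compute the HMAC over the raw request body, before any JSON parsing or re-serialization, or the digests will not match.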

SDKs

Use the official SDKs to avoid hand-writing HTTP calls and error handling.

Node.js

import { ScraperApiClient } from "@scraper-api/sdk-node";

const client = new ScraperApiClient({
  apiKey: process.env.SCRAPER_API_KEY!,
  baseUrl: "https://api.scraper.dev",
});

const res = await client.scrape({ url: "https://example.com", render: false });
console.log(res.data.title, res.meta.request_id);

Python

from scraper_api_sdk import ScraperApiClient

client = ScraperApiClient(api_key="sk_...")
res = client.scrape({"url": "https://example.com", "render": False})
print(res.data["title"], res.meta["request_id"])

Error format

Every error includes a request id for debugging, and every response carries an X-API-Version header. Quota-related responses also include X-RateLimit-* headers.

{
  "success": false,
  "error": {
    "code": "SSRF_BLOCKED",
    "message": "Access to private IP addresses is not allowed",
    "request_id": "req_..."
  }
}

Common error codes: INVALID_REQUEST, UNAUTHORIZED, SSRF_BLOCKED, SELECTOR_NOT_FOUND, SCRAPE_TIMEOUT, QUOTA_EXCEEDED, BROWSER_UNAVAILABLE.
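A client can surface this envelope uniformly by raising on any failed response. A sketch, assuming the error shape shown above (the exception class is illustrative, not part of any SDK):

```python
class ScraperApiError(Exception):
    """Carries the code and request_id from an API error envelope."""
    def __init__(self, code: str, message: str, request_id: str):
        super().__init__(f"{code}: {message} (request_id={request_id})")
        self.code = code
        self.request_id = request_id

def raise_for_error(body: dict) -> dict:
    """Raise ScraperApiError for an error envelope; pass successful bodies through."""
    if body.get("success") is False:
        err = body.get("error", {})
        raise ScraperApiError(err.get("code", "UNKNOWN"),
                              err.get("message", ""),
                              err.get("request_id", ""))
    return body
```

Logging the request_id from the exception is usually enough for support to trace a failed call.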