ScrapeforLLM Docs

List & Get Scrapes

Retrieve your scrape history and check on individual scrape results.



List Your Scrapes

GET /api/app/scrapes

Returns a paginated list of your scrapes, newest first.

cURL

curl "https://scrapeforllm.com/api/app/scrapes?page=1&limit=20" \
  -H "Authorization: Bearer YOUR_API_KEY"
JavaScript

const response = await fetch(
  "https://scrapeforllm.com/api/app/scrapes?page=1&limit=20",
  { headers: { Authorization: "Bearer YOUR_API_KEY" } }
);

const data = await response.json();
for (const scrape of data.scrapes) {
  console.log(scrape.id, scrape.status, scrape.url);
}
Python

import requests

response = requests.get(
    "https://scrapeforllm.com/api/app/scrapes",
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    params={"page": 1, "limit": 20},
)

data = response.json()
for scrape in data["scrapes"]:
    print(scrape["id"], scrape["status"], scrape["url"])

Query Parameters

Parameter  Type    Default  Description
page       number  1        Page number (starts at 1)
limit      number  20       Items per page (max 100)

Response

{
  "scrapes": [
    {
      "id": "550e8400-e29b-41d4-a716-446655440000",
      "url": "https://example.com",
      "type": "scrape",
      "status": "completed",
      "creditsUsed": 1,
      "pagesScraped": null,
      "format": "markdown",
      "firecrawlJobId": null,
      "createdAt": "2025-01-15T10:30:00.000Z",
      "updatedAt": "2025-01-15T10:30:02.000Z",
      "completedAt": "2025-01-15T10:30:02.000Z"
    }
  ],
  "pagination": {
    "page": 1,
    "limit": 20,
    "total": 142,
    "totalPages": 8
  }
}
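The pagination object tells you when to stop requesting pages, so fetching your full history is a simple loop. A minimal Python sketch, assuming the response shape shown above; the fetch_page helper and the limit of 100 per request are illustrative choices, not API requirements:

```python
import requests

BASE = "https://scrapeforllm.com/api/app/scrapes"

def list_all_scrapes(fetch_page):
    """Collect every scrape by walking the paginated listing.

    fetch_page(page) must return the parsed JSON body for that page,
    i.e. a dict with "scrapes" and "pagination" keys as shown above.
    """
    scrapes = []
    page = 1
    while True:
        data = fetch_page(page)
        scrapes.extend(data["scrapes"])
        # totalPages comes from the pagination object in each response
        if page >= data["pagination"]["totalPages"]:
            break
        page += 1
    return scrapes

def fetch_page(page):
    # Illustrative helper: one GET per page, at the 100-item maximum
    response = requests.get(
        BASE,
        headers={"Authorization": "Bearer YOUR_API_KEY"},
        params={"page": page, "limit": 100},
    )
    response.raise_for_status()
    return response.json()
```

Passing the page-fetching function in (rather than calling requests directly inside the loop) keeps the pagination logic easy to test without network access.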

Get a Single Scrape

GET /api/app/scrapes/:id

Returns the full scrape details including the result data.

cURL

curl https://scrapeforllm.com/api/app/scrapes/YOUR_SCRAPE_ID \
  -H "Authorization: Bearer YOUR_API_KEY"
JavaScript

const response = await fetch(
  `https://scrapeforllm.com/api/app/scrapes/${scrapeId}`,
  { headers: { Authorization: "Bearer YOUR_API_KEY" } }
);

const data = await response.json();
console.log(data.scrape.status);
console.log(data.scrape.result);
Python

import requests

response = requests.get(
    f"https://scrapeforllm.com/api/app/scrapes/{scrape_id}",
    headers={"Authorization": "Bearer YOUR_API_KEY"},
)

data = response.json()
print(data["scrape"]["status"])
print(data["scrape"]["result"])

Response

For completed scrapes, the result field contains the scraped data. For crawls still in progress, a progress object is included:

{
  "scrape": {
    "id": "550e8400-e29b-41d4-a716-446655440000",
    "status": "processing"
  },
  "progress": {
    "completed": 15,
    "total": 42,
    "percentage": 35,
    "partialPages": [
      {
        "title": "Introduction",
        "sourceURL": "https://docs.example.com/intro",
        "markdown": "# Introduction\n\nFirst 500 characters...",
        "statusCode": 200
      }
    ]
  }
}

Polling

For crawl jobs, use this endpoint to poll for progress. See the Crawl page for complete polling examples in JavaScript and Python.
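As a rough Python sketch of that polling pattern (the Crawl page remains the reference), the loop below re-fetches the scrape until its status changes. The interval and timeout values are arbitrary, and it assumes "processing" is the only in-flight status, matching the responses shown above:

```python
import time
import requests

def wait_for_scrape(scrape_id, api_key, interval=2.0, timeout=300.0, get=requests.get):
    """Poll GET /api/app/scrapes/:id until the scrape leaves 'processing'.

    `get` is injectable so the loop can be exercised without network access.
    """
    deadline = time.monotonic() + timeout
    while True:
        response = get(
            f"https://scrapeforllm.com/api/app/scrapes/{scrape_id}",
            headers={"Authorization": f"Bearer {api_key}"},
        )
        response.raise_for_status()
        data = response.json()
        if data["scrape"]["status"] != "processing":
            # Terminal state: result is present for completed scrapes
            return data
        if "progress" in data:
            p = data["progress"]
            print(f"{p['completed']}/{p['total']} pages ({p['percentage']}%)")
        if time.monotonic() >= deadline:
            raise TimeoutError(f"scrape {scrape_id} still processing after {timeout}s")
        time.sleep(interval)
```

Choose an interval that matches your crawl size; polling every couple of seconds is plenty for most jobs.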
