Developer's Guide

How to Extract JSON Data

The Challenge: Unstructured Data

Extracting structured data from the web is often a brittle, time-consuming process. You build a custom scraper, only for it to break the moment a website changes its layout. You're left maintaining complex CSS selectors and parsing messy HTML, all for what you really need: clean, predictable JSON.

The TABS API JSON extraction endpoint solves this. It's an intelligent service that accepts a URL and a JSON schema you provide. It then fetches the page, analyzes its content, and returns clean, structured JSON that perfectly matches your schema.

This endpoint turns any web page into a structured API, making it ideal for:

  • Web scraping with a consistent, reliable data structure
  • Extracting product information from e-commerce sites
  • Gathering news articles and blog posts
  • Monitoring competitor pricing and product changes
  • Building data aggregation pipelines
  • Collecting structured data for analysis or AI model training

Core Features:

  • Schema-Based Extraction: You define the "what," and the API handles the "how."
  • Consistent Output: The returned data is validated against your schema.
  • Intelligent Parsing: Works even with complex, dynamic, or JavaScript-heavy pages.
  • Built-in Caching: Improves performance for frequently accessed pages (and can be bypassed when needed).

Prerequisites

Before you can start, you'll need a few things:

  1. A valid TABS API key: Sign up at https://tabstack.ai to get your free key.
  2. Authentication: The endpoint uses Bearer token authentication.
  3. A JSON schema: This defines the data structure you want to extract. You can write one by hand or generate one automatically using our JSON Schema endpoint.

We strongly recommend storing your API key as an environment variable to avoid hard-coding it in your scripts.

First, set your API key as an environment variable in your terminal.

export TABS_API_KEY="your-api-key-here"
  • Explanation:

    • export TABS_API_KEY=...: This command creates an environment variable named TABS_API_KEY and assigns your key to it. Our code examples (in Python, JavaScript, and curl) are configured to read this variable.
  • How to Run:

    • Copy this command, replace "your-api-key-here" with your actual API key, and run it in the terminal session where you'll be executing your scripts.
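
If you'd like to confirm the variable is visible before running anything else, here is a minimal JavaScript check (a sketch assuming Node.js; the filename check-key.js is just a suggestion). The later examples read the key the same way, via process.env.TABS_API_KEY.

// check-key.js - verify the API key is available to your scripts (illustrative helper)
const apiKey = process.env.TABS_API_KEY;

if (!apiKey) {
  console.error('TABS_API_KEY is not set. Run: export TABS_API_KEY="your-api-key-here"');
  process.exit(1);
}

console.log(`TABS_API_KEY is set (${apiKey.length} characters)`);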

Your First Extraction: A Step-by-Step Guide

Let's walk through a practical example: extracting the top stories from Hacker News.

Our goal is to get a list of stories, and for each story, we want its title and points.

To do this, we will send a POST request to the https://api.tabstack.ai/v1/extract/json endpoint. The body of our request will be a JSON object containing two required properties:

  1. "url": The page we want to scrape (https://news.ycombinator.com).
  2. "json_schema": The data structure we want back.

Here is the complete request. We'll start with curl, followed by an equivalent JavaScript version (extract.js) that later sections build on.

This example uses curl to make a direct request from your terminal.

curl -X POST https://api.tabstack.ai/v1/extract/json \
  -H "Authorization: Bearer $TABS_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://news.ycombinator.com",
    "json_schema": {
      "type": "object",
      "properties": {
        "stories": {
          "type": "array",
          "items": {
            "type": "object",
            "properties": {
              "title": {"type": "string"},
              "points": {"type": "number"}
            }
          }
        }
      }
    }
  }'

This sends a POST request to the extraction endpoint with authentication and a JSON payload. The key part is the json_schema — it defines exactly what data structure you want back. Here, we're asking for an object with a stories array, where each story has a title (string) and points (number). The API will find matching data on the page and return it in this format.

  • How to Run:

    • Make sure you have set your TABS_API_KEY environment variable in the same terminal session.
    • Copy and paste the entire command into your terminal and press Enter.
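
The same request in JavaScript (using the fetch API built into Node.js 18+) looks like this. It mirrors the curl example above; save it as extract.js, since later sections build on it.

// extract.js - same request as the curl example, using Node's built-in fetch
async function extract() {
  const response = await fetch('https://api.tabstack.ai/v1/extract/json', {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${process.env.TABS_API_KEY}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      url: 'https://news.ycombinator.com',
      json_schema: {
        type: 'object',
        properties: {
          stories: {
            type: 'array',
            items: {
              type: 'object',
              properties: {
                title: { type: 'string' },
                points: { type: 'number' }
              }
            }
          }
        }
      }
    })
  });

  if (!response.ok) {
    throw new Error(`Request failed with HTTP status ${response.status}`);
  }

  const data = await response.json();
  console.log(JSON.stringify(data, null, 2));
  return data;
}

extract().catch(error => console.error('Extraction failed:', error.message));

  • How to Run:

    • Save this as extract.js.
    • Run node extract.js in the same terminal session where you set TABS_API_KEY.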

Understanding the Response

A successful request (after a few moments of processing) will return a 200 OK status. The response body will contain the clean, structured data you asked for.

Here is a sample successful response, based on the schema we provided in the request above.

{
  "stories": [
    {
      "title": "New AI Model Released",
      "points": 342
    },
    {
      "title": "Database Performance Tips",
      "points": 156
    },
    {
      "title": "Understanding Distributed Systems",
      "points": 89
    }
  ]
}

The response structure exactly matches your schema—that's the power of this endpoint. Instead of parsing HTML yourself, you get clean JSON with proper types (strings and numbers, not everything as text). This data is ready to use immediately in your application.


API Parameters Reference

The request body is a JSON object with the following properties:

url (required)

  • Type: string (URI format)
  • Description: The fully qualified, publicly accessible URL of the web page you want to fetch and extract data from.
  • Validation:
    • Must be a valid URL format (e.g., https://example.com).
    • Cannot access internal/private resources (e.g., localhost, 127.0.0.1, or private IPs).

json_schema (required)

  • Type: object
  • Description: A JSON schema definition that describes the structure of the data you want to extract. The API will use this schema to guide its extraction and parsing.
  • Tips for creating schemas:
    • Best Practice: Use the JSON Schema endpoint to generate a schema automatically. It's much faster and more accurate than writing one manually.
    • Be specific about required vs. optional fields using the required keyword.
    • Include description fields in your properties to give the AI extractor hints about what data to find.

Here is a more complex schema example for a blog homepage that lists multiple articles:

{
  "json_schema": {
    "type": "object",
    "properties": {
      "title": {
        "type": "string",
        "description": "The main title of the blog post"
      },
      "articles": {
        "type": "array",
        "items": {
          "type": "object",
          "properties": {
            "headline": {"type": "string"},
            "author": {"type": "string"},
            "date": {"type": "string"}
          },
          "required": ["headline"]
        }
      }
    },
    "required": ["title", "articles"]
  }
}

This schema demonstrates two useful techniques. First, the description field helps the API distinguish between similar elements—here, finding the main title versus other headings. Second, the required array at different levels controls data quality: setting required: ["headline"] in items filters out incomplete entries, while the root-level required ensures critical fields are present or the extraction fails entirely.

nocache (optional)

  • Type: boolean
  • Default: false
  • Description: Bypasses the cache and forces a fresh fetch and extraction of the URL.

By default, the API caches responses for a short period to improve performance and reduce redundant fetches. Setting nocache to true is useful for:

  • Getting real-time data from frequently updated pages.
  • Debugging extraction issues.
  • Forcing a re-scrape after a page's structure has changed.

This payload demonstrates how to use the nocache parameter.

{
  "url": "https://example.com/products",
  "json_schema": { ... },
  "nocache": true
}

Setting nocache: true forces a fresh extraction, bypassing the cache. This is useful for real-time data but will be slower since nothing can be reused from previous requests.


Real-World Examples

Example 1: E-commerce Product Extraction

Here, we'll extract product data from a hypothetical e-commerce category page.

This is the request payload. We are defining a schema to capture a list of products with their name, price, stock status, and rating.

{
  "url": "https://shop.example.com/category/laptops",
  "json_schema": {
    "type": "object",
    "properties": {
      "products": {
        "type": "array",
        "items": {
          "type": "object",
          "properties": {
            "name": {"type": "string"},
            "price": {"type": "number"},
            "currency": {"type": "string"},
            "inStock": {"type": "boolean"},
            "rating": {"type": "number"}
          },
          "required": ["name", "price"]
        }
      }
    }
  }
}

The schema defines a products array with properties of different types—strings, numbers, and booleans. Making name and price required prevents partial entries; products without these critical fields won't be included in the response.

This is a potential response from the API.

{
  "products": [
    {
      "name": "Pro Laptop 15\"",
      "price": 1299.99,
      "currency": "USD",
      "inStock": true,
      "rating": 4.5
    },
    {
      "name": "Business Ultrabook",
      "price": 899.99,
      "currency": "USD",
      "inStock": false,
      "rating": 4.2
    }
  ]
}

Notice the API handles type conversion automatically—prices become numbers (not strings like "$899.99"), and stock status becomes a proper boolean. Optional fields like rating are included when found but omitted when missing, keeping your data clean.

Example 2: News Article Extraction

This example shows how to gather a list of articles from a news homepage.

This is the request payload. We want a list of articles, each with a title, summary, URL, and publication date.

{
  "url": "https://news.example.com",
  "json_schema": {
    "type": "object",
    "properties": {
      "articles": {
        "type": "array",
        "items": {
          "type": "object",
          "properties": {
            "title": {"type": "string"},
            "summary": {"type": "string"},
            "url": {"type": "string"},
            "publishedAt": {"type": "string"},
            "category": {"type": "string"}
          },
          "required": ["title", "url"]
        }
      }
    }
  }
}

This schema extracts article metadata including URLs. The API intelligently identifies article links on the page. Making title and url required ensures you only get complete article data.

This is a potential response from the API.

{
  "articles": [
    {
      "title": "Global Climate Summit Reaches Agreement",
      "summary": "World leaders agree on new emissions targets",
      "url": "https://news.example.com/climate-summit",
      "publishedAt": "2024-01-15T10:30:00Z",
      "category": "Environment"
    }
  ]
}

The API extracted the complete article data, including properly formatted dates in ISO 8601 format when available on the source page.
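
Because publishedAt is an ISO 8601 string, you can parse it directly with standard date facilities. Here is a quick sketch (reusing the articles response above):

// ... after you get the 'data' from the API (the articles example above) ...
const sortedArticles = data.articles
  .filter(article => article.publishedAt)  // skip articles with no date
  .sort((a, b) => new Date(b.publishedAt) - new Date(a.publishedAt));

console.log('Most recent article:', sortedArticles[0]?.title);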


Putting It to Work: Processing and Saving Data

Getting the JSON is just the first step. Here’s how you can immediately process or save that data.

Processing Extracted Data

Once you have the JSON response, you can use standard programming-language features to filter, sort, and analyze it. This example takes the product data from our e-commerce example and finds only the in-stock products, sorted by price.

This code shows how to filter and sort the extracted data in JavaScript.

async function processExtractedData() {
  // ... [API fetch logic from "Your First Extraction" example] ...
  // Assume 'data' is the JSON response:
  // const data = await response.json();

  const data = {
    "products": [
      {"name": "Pro Laptop 15\"", "price": 1299.99, "inStock": true},
      {"name": "Business Ultrabook", "price": 899.99, "inStock": false},
      {"name": "Gamer Rig", "price": 1799.99, "inStock": true}
    ]
  };

  // Filter and process the data
  const availableProducts = data.products
    .filter(p => p.inStock)
    .sort((a, b) => a.price - b.price);

  console.log('Available products (lowest price first):');
  availableProducts.forEach(product => {
    console.log(`${product.name}: $${product.price}`);
  });

  return availableProducts;
}

// Call the function
processExtractedData();

Once you have the extracted data (here we're using mock data for demonstration), processing it is straightforward. The example chains .filter() to get only in-stock products, then .sort() to order by price. This is standard JavaScript array manipulation—the API gives you clean data, you process it however you need.

  • How to Run:

    • You can add this logic to the extract.js script you created earlier, right after const data = await response.json().

Saving Data to Files

You can also easily save your extracted data to a file, like a JSON file for later use or a CSV file for analysis in a spreadsheet.

This script fetches data and saves it to both extracted-data.json and extracted-data.csv.

const fs = require('fs').promises; // Using the promises API of the 'fs' module

async function saveExtractedData() {
  // ... [API fetch logic] ...
  // const data = await response.json();

  // Using Hacker News example data
  const data = {
    "stories": [
      {"title": "New AI Model Released", "points": 342},
      {"title": "Database Performance Tips", "points": 156},
      {"title": "Understanding Distributed Systems", "points": 89}
    ]
  };

  try {
    // Save as JSON
    await fs.writeFile('extracted-data.json', JSON.stringify(data, null, 2));
    console.log('Data saved to extracted-data.json');

    // Save as CSV
    const csvHeader = 'Title,Points';
    const csvRows = data.stories.map(s => {
      // Quote the title and escape embedded quotes so commas/quotes stay CSV-safe
      const title = `"${s.title.replace(/"/g, '""')}"`;
      return `${title},${s.points}`;
    });

    const csv = [csvHeader, ...csvRows].join('\n');
    await fs.writeFile('extracted-data.csv', csv);
    console.log('Data saved to extracted-data.csv');

  } catch (error) {
    console.error('Error saving data:', error);
  }
}

// Call the function
saveExtractedData();

This example shows saving data in two formats. For JSON, JSON.stringify() with formatting creates a readable file. For CSV, the code maps each story to a CSV row, properly escaping quotes with "" to handle titles containing commas or quotes. The key technique here is .map() to transform objects into CSV rows, then .join('\n') to create the final file content.

  • How to Run:

    • Save this as save.js.
    • Run from your terminal: node save.js.
    • Check your directory for two new files: extracted-data.json and extracted-data.csv.

Error Handling

A robust application must handle potential failures. The API uses standard HTTP status codes to indicate the success or failure of a request.

Common Error Status Codes

Status Code | Error Message | Description
400 | url is required | The request body is missing the required url parameter.
400 | json schema is required | The request body is missing the required json_schema parameter.
400 | json schema must be a valid object | The json_schema provided is not valid.
400 | invalid JSON request body | The request body itself is malformed JSON.
401 | Unauthorized - Invalid token | Your Authorization header is missing or your API key is invalid.
422 | url is invalid | The provided URL is malformed or cannot be processed.
500 | failed to fetch URL | The server encountered an error trying to access the target URL.
500 | web page is too large | The target page's content exceeds the processing size limit.
500 | failed to generate JSON | The server failed to extract data matching your schema. This can happen if the page structure is vastly different from what the schema implies.

Error Response Format

All error responses return a JSON object with a single error field.

{
  "error": "json schema is required"
}

Error Handling Examples

Here’s how to build robust error handling into your application.

This function wraps the API call in a try/catch block and checks the response.ok status.

async function extractWithErrorHandling(url, jsonSchema) {
  try {
    const response = await fetch('https://api.tabstack.ai/v1/extract/json', {
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${process.env.TABS_API_KEY}`,
        'Content-Type': 'application/json'
      },
      body: JSON.stringify({
        url: url,
        json_schema: jsonSchema
      })
    });

    // We must parse the body to get the error message, even for non-200 responses
    const data = await response.json();

    if (!response.ok) {
      // The request failed
      const statusCode = response.status;
      const errorMessage = data.error || `HTTP error ${statusCode}`;

      // Handle specific error codes
      switch (statusCode) {
        case 400:
          throw new Error(`Bad request: ${errorMessage}`);
        case 401:
          throw new Error('Authentication failed. Check your API key.');
        case 422:
          throw new Error(`Invalid URL: ${errorMessage}`);
        case 500:
          throw new Error(`Server error: ${errorMessage}`);
        default:
          throw new Error(`Request failed: ${errorMessage}`);
      }
    }

    // If response.ok is true, we have data
    return data;

  } catch (error) {
    // This catches network errors or errors thrown from our check
    console.error('Error extracting JSON:', error.message);
    throw error;
  }
}

// --- Usage Example ---
const schema = {type: 'object', properties: {title: {type: 'string'}}};

// Test success
extractWithErrorHandling('https://example.com', schema)
  .then(data => console.log('Success:', data))
  .catch(error => console.error('Failed:', error.message));

// Test failure (missing schema)
extractWithErrorHandling('https://example.com', null)
  .then(data => console.log('Success:', data))
  .catch(error => console.error('Failed:', error.message));

This error handling pattern checks response.ok to detect HTTP errors before processing data. The switch statement maps status codes to meaningful error messages. Always parse the response body first—even error responses contain useful information in their error field. The try/catch wrapper catches both network failures and the errors we throw for bad status codes.

  • How to Run:

    • Save this as error_handling.js.
    • Run node error_handling.js to see both the "Success" and "Failed" example logs.

Advanced Usage Patterns

Using with Schema Generation

The most powerful TABS workflow is to combine schema generation with schema extraction. This creates a "one-shot" scraper.

  1. Step 1: Call the /extract/json/schema endpoint with a URL and plain-text instructions.
  2. Step 2: Use the json_schema from that response to call the /extract/json endpoint.

This function chains the two API calls together.

async function completeWorkflow(url, instructions) {
  const apiKey = process.env.TABS_API_KEY;

  // Step 1: Generate schema
  console.log('Step 1: Generating schema...');
  const schemaResponse = await fetch('https://api.tabstack.ai/v1/extract/json/schema', {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${apiKey}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      url,
      instructions
    })
  });

  if (!schemaResponse.ok) throw new Error('Failed to generate schema');
  const schema = await schemaResponse.json();
  console.log('Generated schema:', JSON.stringify(schema, null, 2));

  // Step 2: Use schema to extract data
  console.log('Step 2: Extracting data with schema...');
  const extractResponse = await fetch('https://api.tabstack.ai/v1/extract/json', {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${apiKey}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      url,
      json_schema: schema
    })
  });

  if (!extractResponse.ok) throw new Error('Failed to extract data');
  const data = await extractResponse.json();
  console.log('Extracted data:', JSON.stringify(data, null, 2));

  return { schema, data };
}

// Usage
completeWorkflow(
  'https://news.ycombinator.com',
  'extract top 5 stories with title, points, and author'
).catch(e => console.error(e.message));

This workflow combines schema generation and extraction into a single function. First, it generates a schema from your natural language instructions. Then it immediately uses that schema to extract data. This "one-shot" approach means you don't have to manually write schemas—just describe what you want in plain English.

  • How to Run:

    • Save this as workflow.js.
    • Run node workflow.js. You will see the two-step process log to your console.

Batch Processing Multiple URLs

To extract data from multiple pages (like a list of product pages), you can loop through your URLs and call the API for each one.

Note: Please be a good web citizen. When running batch jobs, we recommend adding a small delay between requests to avoid overwhelming the API or the target server.

This function loops through a list of URLs and aggregates the results.

// Helper function for a respectful delay
const sleep = (ms) => new Promise(resolve => setTimeout(resolve, ms));

async function batchExtract(urls, schema) {
  const results = [];
  const apiKey = process.env.TABS_API_KEY;

  for (const url of urls) {
    try {
      console.log(`Extracting ${url}...`);
      const response = await fetch('https://api.tabstack.ai/v1/extract/json', {
        method: 'POST',
        headers: {
          'Authorization': `Bearer ${apiKey}`,
          'Content-Type': 'application/json'
        },
        body: JSON.stringify({
          url,
          json_schema: schema
        })
      });

      if (!response.ok) {
        throw new Error(`HTTP error ${response.status}`);
      }

      const data = await response.json();
      results.push({ url, success: true, data });

    } catch (error) {
      console.error(`Failed to extract ${url}: ${error.message}`);
      results.push({ url, success: false, error: error.message });
    }

    // Respectful rate limiting: wait 500ms between requests
    await sleep(500);
  }

  return results;
}

// Usage
const urls = [
  'https://example.com/page1',
  'https://example.com/page2',
  'https://example.com/page3'
];

const schema = {
  type: 'object',
  properties: {
    title: { type: 'string' },
    content: { type: 'string' }
  }
};

batchExtract(urls, schema).then(results => {
  const successful = results.filter(r => r.success);
  console.log(`\n--- Batch Complete ---`);
  console.log(`Successfully extracted ${successful.length}/${urls.length} pages.`);
});

This function loops through multiple URLs, extracting data from each with the same schema. The key detail is the rate limiting—await sleep(500) adds a 500ms delay between requests to avoid overwhelming servers. Each result (success or failure) is tracked in the results array, letting you see which extractions worked and which didn't.

  • How to Run:

    • Save as batch.js.
    • Run node batch.js.

Best Practices

1. Generate Schemas, Don't Write Them

Manually writing complex JSON schemas is tedious and error-prone. Always start by using the JSON Schema endpoint to automatically generate a schema. You can then fine-tune that schema if needed.
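
As a starting point, here is a small JavaScript sketch that calls the JSON Schema endpoint with plain-English instructions and prints the generated schema so you can review and fine-tune it. It uses the same /extract/json/schema request shape as the workflow example above; the function name generateSchema is just for illustration.

// generate-schema.js - draft a schema automatically, then fine-tune it by hand
async function generateSchema(url, instructions) {
  const response = await fetch('https://api.tabstack.ai/v1/extract/json/schema', {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${process.env.TABS_API_KEY}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({ url, instructions })
  });

  if (!response.ok) throw new Error(`Schema generation failed: HTTP ${response.status}`);
  return response.json();
}

// Print the generated schema for review
generateSchema('https://news.ycombinator.com', 'extract stories with title and points')
  .then(schema => console.log(JSON.stringify(schema, null, 2)))
  .catch(error => console.error(error.message));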

2. Test Schemas on Representative Pages

A schema that works for one product page might fail on another (e.g., a "product bundle" page). Before deploying to production, test your schema against a handful of representative URLs to ensure it's robust.

This script shows a simple testing harness for a schema.

# This example is in Python
def test_schema(urls, schema):
    print('--- Testing schema ---')
    success_count = 0

    for url in urls:
        try:
            # Assumes a Python equivalent of the extractWithErrorHandling
            # function shown earlier (a helper you would write yourself)
            data = extract_with_error_handling(url, schema)
            if data:
                print(f'✓ {url}: Success')
                success_count += 1
            else:
                print(f'✗ {url}: Failed extraction')
        except Exception as e:
            print(f'✗ {url}: {e}')

    print(f'--- Test Complete: {success_count}/{len(urls)} successful ---')

# Test on multiple representative pages
test_urls = [
    'https://example.com/product/1',
    'https://example.com/product/2',
    'https://example.com/product/on-sale'
]
# your_schema = ... (the schema you want to test)
# test_schema(test_urls, your_schema)

This simple test harness validates your schema against multiple URLs. It gives you a quick pass/fail report, helping you identify edge cases before production. Testing against varied page types (regular products, sale items, bundles) reveals schema weaknesses early.

3. Handle Missing or Null Data

Web pages are unreliable. A "rating" field might not exist for a new product. To prevent your application from crashing, design your schemas and code to handle missing data.

This schema demonstrates how to define optional and nullable fields.

{
  "type": "object",
  "properties": {
    "title": {
      "type": "string",
      "description": "Product title"
    },
    "price": {
      "type": ["number", "null"],
      "description": "Price (may be null if 'Call for Price')"
    },
    "rating": {
      "type": "number",
      "description": "Customer rating (optional)"
    }
  },
  "required": ["title"]
}

Using "type": ["number", "null"] allows a field to be either a number or null—useful for prices that might show "Call for quote." Fields not in the required array are optional; they'll be omitted if not found. Only fields in required must exist, or the extraction fails for that item.

4. Use Caching Strategically

  • Default (Cached): For most use cases, like scraping articles or products that don't change every second, our default caching is ideal. It's fast and reduces load.
  • nocache: true: Only use this when you absolutely need real-time data, such as for monitoring stock prices, or when you are actively debugging a schema.

5. Validate Extracted Data

Don't trust, verify. Even if the API successfully returns data, add a layer of validation in your own application before using it.

This JavaScript snippet shows a basic post-extraction validation check.

// ... after you get the 'data' from the API ...
// const data = await extractAndValidate(url, schema);

if (!data.products || data.products.length === 0) {
  throw new Error('Validation failed: No products array found');
}

// Check data quality
const invalidProducts = data.products.filter(p => !p.name || !p.price);
if (invalidProducts.length > 0) {
  console.warn(`Warning: Found ${invalidProducts.length} products with missing name or price`);
}

// If it passed, the data is good to use
// processProducts(data.products);

This validation checks that you got the expected structure (a products array exists) and that individual items have required fields. Filtering for incomplete items helps you monitor data quality—you might get a successful API response, but some products could be missing critical information.