Developer's Guide
How to Extract JSON Data
The Challenge: Unstructured Data
Extracting structured data from the web is often a brittle, time-consuming process. You build a custom scraper, only for it to break the moment a website changes its layout. You're left maintaining complex CSS selectors and parsing messy HTML, all to get the one thing you actually need: clean, predictable JSON.
The Tabstack API JSON extraction endpoint solves this. It's an intelligent service that accepts a URL and a JSON schema you provide. It then fetches the page, analyzes its content, and returns clean, structured JSON that perfectly matches your schema.
This endpoint turns any web page into a structured API, making it ideal for:
- Web scraping with a consistent, reliable data structure
- Extracting product information from e-commerce sites
- Gathering news articles and blog posts
- Monitoring competitor pricing and product changes
- Building data aggregation pipelines
- Collecting structured data for analysis or AI model training
Core Features:
- Schema-Based Extraction: You define the "what," and the API handles the "how."
- Consistent Output: The returned data is validated against your schema.
- Intelligent Parsing: Works even with complex, dynamic, or JavaScript-heavy pages.
- Built-in Caching: Improves performance for frequently accessed pages (and can be bypassed when needed).
Prerequisites
Before you can start, you'll need a few things:
- A valid Tabstack API key: Sign up at https://tabstack.ai to get your free key.
- Authentication: The endpoint uses Bearer token authentication.
- A JSON schema: This defines the data structure you want to extract. See the examples throughout this guide, and use description fields to help the AI understand what data to find.
We strongly recommend storing your API key as an environment variable to avoid hard-coding it in your scripts.
First, set your API key as an environment variable in your terminal.
export TABSTACK_API_KEY="your-api-key-here"
Explanation:
- export TABSTACK_API_KEY=...: This command creates an environment variable named TABSTACK_API_KEY and assigns your key to it. Our code examples (in Python, JavaScript, and curl) are configured to read this variable.
How to Run:
- Copy this command, replace "your-api-key-here" with your actual API key, and run it in the terminal session where you'll be executing your scripts.
Your First Extraction: A Step-by-Step Guide
Let's walk through a practical example: extracting the top stories from Hacker News.
Our goal is to get a list of stories, and for each story, we want its title and points.
To do this, we will send a POST request to the https://api.tabstack.ai/v1/extract/json endpoint. The body of our request will be a JSON object containing two required properties:
"url": The page we want to scrape (https://news.ycombinator.com)."json_schema": The data structure we want back.
Here is the complete request using curl, JavaScript, and Python.
- curl
- JavaScript
- Python
This example uses curl to make a direct request from your terminal.
curl -X POST https://api.tabstack.ai/v1/extract/json \
  -H "Authorization: Bearer $TABSTACK_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://news.ycombinator.com",
    "json_schema": {
      "type": "object",
      "properties": {
        "stories": {
          "type": "array",
          "items": {
            "type": "object",
            "properties": {
              "title": {"type": "string"},
              "points": {"type": "number"}
            }
          }
        }
      }
    }
  }'
This sends a POST request to the extraction endpoint with authentication and a JSON payload. The key part is the json_schema — it defines exactly what data structure you want back. Here, we're asking for an object with a stories array, where each story has a title (string) and points (number). The API will find matching data on the page and return it in this format.
How to Run:
- Make sure you have set your TABSTACK_API_KEY environment variable in the same terminal session.
- Copy and paste the entire command into your terminal and press Enter.
This example uses the Tabstack TypeScript SDK.
import Tabstack from '@tabstack/sdk';

async function extractJson() {
  const schema = {
    type: 'object',
    properties: {
      stories: {
        type: 'array',
        items: {
          type: 'object',
          properties: {
            title: { type: 'string' },
            points: { type: 'number' }
          }
        }
      }
    }
  };

  const client = new Tabstack({
    apiKey: process.env.TABSTACK_API_KEY
  });

  const result = await client.extract.json({
    url: 'https://news.ycombinator.com',
    json_schema: schema
  });

  console.log(JSON.stringify(result, null, 2));
  return result;
}

// Call the function
extractJson();
The SDK handles authentication, serialization, and HTTP requests automatically. Define your schema, call client.extract.json() with the URL and schema, and get clean, structured data back.
How to Run:
- Install the SDK: npm install @tabstack/sdk.
- Save this code as extract.mjs.
- Make sure you have set your TABSTACK_API_KEY environment variable.
- Run the script from your terminal: node extract.mjs.
This example uses the Tabstack Python SDK.
import os
import json
from tabstack import Tabstack

def extract_json():
    schema = {
        'type': 'object',
        'properties': {
            'stories': {
                'type': 'array',
                'items': {
                    'type': 'object',
                    'properties': {
                        'title': {'type': 'string'},
                        'points': {'type': 'number'}
                    }
                }
            }
        }
    }

    with Tabstack(api_key=os.environ.get("TABSTACK_API_KEY")) as client:
        result = client.extract.json(
            url='https://news.ycombinator.com',
            json_schema=schema
        )
        print(json.dumps(result, indent=2))
        return result

# Call the function
if __name__ == "__main__":
    extract_json()
The Python SDK handles authentication, serialization, and HTTP requests automatically. Define your schema, call client.extract.json() with the URL and schema, and get clean, structured data back.
How to Run:
- Install the SDK: pip install tabstack.
- Save this code as extract.py.
- Make sure you have set your TABSTACK_API_KEY environment variable.
- Run the script from your terminal: python extract.py.
Understanding the Response
A successful request (after a few moments of processing) will return a 200 OK status. The response body will contain the clean, structured data you asked for.
Below is a sample successful response from the API, based on the schema we provided in the request examples.
{
  "stories": [
    {
      "title": "New AI Model Released",
      "points": 342
    },
    {
      "title": "Database Performance Tips",
      "points": 156
    },
    {
      "title": "Understanding Distributed Systems",
      "points": 89
    }
  ]
}
The response structure exactly matches your schema—that's the power of this endpoint. Instead of parsing HTML yourself, you get clean JSON with proper types (strings and numbers, not everything as text). This data is ready to use immediately in your application.
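For instance, once you have parsed the response, picking out the highest-scoring story takes one line. A minimal Python sketch, assuming the response shaped like the sample above has been loaded into a dict named data:

import json

# Sample response from the endpoint, shaped like the example above
data = json.loads('''
{
  "stories": [
    {"title": "New AI Model Released", "points": 342},
    {"title": "Database Performance Tips", "points": 156},
    {"title": "Understanding Distributed Systems", "points": 89}
  ]
}
''')

# "points" is already a number, so comparisons need no casting
top = max(data["stories"], key=lambda s: s["points"])
print(f"Top story: {top['title']} ({top['points']} points)")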
API Parameters Reference
The request body is a JSON object with the following properties:
url (required)
- Type: string (URI format)
- Description: The fully qualified, publicly accessible URL of the web page you want to fetch and extract data from.
- Validation:
  - Must be a valid URL format (e.g., https://example.com).
  - Cannot access internal/private resources (e.g., localhost, 127.0.0.1, or private IPs).
json_schema (required)
- Type: object
- Description: A JSON schema definition that describes the structure of the data you want to extract. The API will use this schema to guide its extraction and parsing.
- Tips for creating schemas:
  - Include description fields in your schema properties to give the AI extractor context about what data to find.
  - Start with a simple schema and refine it based on results.
  - Be specific about required vs. optional fields using the required keyword.
Here is a more complex schema example for a blog post:
{
  "json_schema": {
    "type": "object",
    "properties": {
      "title": {
        "type": "string",
        "description": "The main title of the blog post"
      },
      "articles": {
        "type": "array",
        "items": {
          "type": "object",
          "properties": {
            "headline": {"type": "string"},
            "author": {"type": "string"},
            "date": {"type": "string"}
          },
          "required": ["headline"]
        }
      }
    },
    "required": ["title", "articles"]
  }
}
This schema demonstrates two useful techniques. First, the description field helps the API distinguish between similar elements—here, finding the main title versus other headings. Second, the required array at different levels controls data quality: setting required: ["headline"] in items filters out incomplete entries, while the root-level required ensures critical fields are present or the extraction fails entirely.
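If you want an extra safety net, you can re-validate the returned data against the same schema on your side. A minimal sketch using the third-party jsonschema package (our choice here, not part of the Tabstack SDK; any JSON Schema validator works):

# pip install jsonschema
from jsonschema import validate, ValidationError

schema = {
    "type": "object",
    "properties": {
        "title": {"type": "string"},
        "articles": {
            "type": "array",
            "items": {
                "type": "object",
                "properties": {"headline": {"type": "string"}},
                "required": ["headline"]
            }
        }
    },
    "required": ["title", "articles"]
}

extracted = {"title": "Engineering Blog", "articles": [{"headline": "Post 1"}]}

try:
    validate(instance=extracted, schema=schema)  # raises ValidationError on mismatch
    print("Extraction matches the schema")
except ValidationError as e:
    print(f"Schema mismatch: {e.message}")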
nocache (optional)
- Type: boolean
- Default: false
- Description: Bypasses the cache and forces a fresh fetch and extraction of the URL.
By default, the API caches responses for a short period to improve performance and reduce redundant fetches. Setting nocache to true is useful for:
- Getting real-time data from frequently updated pages.
- Debugging extraction issues.
- Forcing a re-scrape after a page's structure has changed.
This payload demonstrates how to use the nocache parameter.
{
  "url": "https://example.com/products",
  "json_schema": { ... },
  "nocache": true
}
Setting nocache: true forces a fresh extraction, bypassing the cache. This is useful for real-time data but will be slower since nothing can be reused from previous requests.
geo_target (optional)
- Type: object
- Description: Geotargeting parameters to make requests appear from a specific country. Useful for extracting region-specific content or pricing.
The geo_target object contains a country field with an ISO 3166-1 alpha-2 country code (e.g., "US", "GB", "JP").
{
  "url": "https://shop.example.com/product/laptop",
  "json_schema": { ... },
  "geo_target": {
    "country": "GB"
  }
}
This extracts the product data as it appears to users in the United Kingdom, which may include different pricing, availability, or product variants.
When to use geotargeting:
- Regional pricing: E-commerce sites that show different prices by region
- Localized content: Pages with region-specific content or offers
- Market research: Comparing product availability across regions
- Compliance testing: Ensuring correct region-specific information is displayed
Real-World Examples
Example 1: E-commerce Product Extraction
Here, we'll extract product data from a hypothetical e-commerce category page.
This is the request payload. We are defining a schema to capture a list of products with their name, price, stock status, and rating.
{
  "url": "https://shop.example.com/category/laptops",
  "json_schema": {
    "type": "object",
    "properties": {
      "products": {
        "type": "array",
        "items": {
          "type": "object",
          "properties": {
            "name": {"type": "string"},
            "price": {"type": "number"},
            "currency": {"type": "string"},
            "inStock": {"type": "boolean"},
            "rating": {"type": "number"}
          },
          "required": ["name", "price"]
        }
      }
    }
  }
}
The schema defines a products array with properties of different types—strings, numbers, and booleans. Making name and price required prevents partial entries; products without these critical fields won't be included in the response.
This is a potential response from the API.
{
  "products": [
    {
      "name": "Pro Laptop 15\"",
      "price": 1299.99,
      "currency": "USD",
      "inStock": true,
      "rating": 4.5
    },
    {
      "name": "Business Ultrabook",
      "price": 899.99,
      "currency": "USD",
      "inStock": false,
      "rating": 4.2
    }
  ]
}
Notice the API handles type conversion automatically—prices become numbers (not strings like "$899.99"), and stock status becomes a proper boolean. Optional fields like rating are included when found but omitted when missing, keeping your data clean.
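Because optional fields like rating can be absent, downstream code should not assume they exist. A short Python sketch, assuming a products list shaped like the response above (the second entry is hypothetical):

# Two products: one with a rating, one where the page had no rating to extract
products = [
    {"name": "Pro Laptop 15\"", "price": 1299.99, "rating": 4.5},
    {"name": "Budget Laptop", "price": 499.99},  # hypothetical entry, no rating
]

# Average only the ratings that were actually found
ratings = [p["rating"] for p in products if "rating" in p]
average = sum(ratings) / len(ratings) if ratings else None
print(f"Average rating: {average} ({len(ratings)} of {len(products)} products rated)")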
Example 2: News Article Extraction
This example shows how to gather a list of articles from a news homepage.
This is the request payload. We want a list of articles, each with a title, summary, URL, and publication date.
{
  "url": "https://news.example.com",
  "json_schema": {
    "type": "object",
    "properties": {
      "articles": {
        "type": "array",
        "items": {
          "type": "object",
          "properties": {
            "title": {"type": "string"},
            "summary": {"type": "string"},
            "url": {"type": "string"},
            "publishedAt": {"type": "string"},
            "category": {"type": "string"}
          },
          "required": ["title", "url"]
        }
      }
    }
  }
}
This schema extracts article metadata including URLs. The API intelligently identifies article links on the page. Making title and url required ensures you only get complete article data.
This is a potential response from the API.
{
  "articles": [
    {
      "title": "Global Climate Summit Reaches Agreement",
      "summary": "World leaders agree on new emissions targets",
      "url": "https://news.example.com/climate-summit",
      "publishedAt": "2024-01-15T10:30:00Z",
      "category": "Environment"
    }
  ]
}
The API extracted the complete article data, including properly formatted dates in ISO 8601 format when available on the source page.
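Because publishedAt arrives as an ISO 8601 string, Python's standard library can parse it directly for sorting or filtering. A minimal sketch, assuming articles shaped like the response above:

from datetime import datetime, timezone

articles = [
    {"title": "Global Climate Summit Reaches Agreement",
     "publishedAt": "2024-01-15T10:30:00Z"},
]

for article in articles:
    # Replace the trailing "Z" so fromisoformat accepts it on all Python 3.x versions
    published = datetime.fromisoformat(article["publishedAt"].replace("Z", "+00:00"))
    age = datetime.now(timezone.utc) - published
    print(f"{article['title']}: published {age.days} days ago")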
Example 3: Regional Pricing Comparison
This example shows how to extract pricing information from different regions using geotargeting.
- curl
- JavaScript
- Python
# Extract US pricing
curl -X POST https://api.tabstack.ai/v1/extract/json \
  -H "Authorization: Bearer $TABSTACK_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://shop.example.com/product/wireless-headphones",
    "geo_target": {
      "country": "US"
    },
    "json_schema": {
      "type": "object",
      "properties": {
        "price": {"type": "number"},
        "currency": {"type": "string"},
        "inStock": {"type": "boolean"}
      }
    }
  }'
import Tabstack from '@tabstack/sdk';

async function compareRegionalPricing(productUrl) {
  const schema = {
    type: 'object',
    properties: {
      price: { type: 'number' },
      currency: { type: 'string' },
      inStock: { type: 'boolean' }
    }
  };

  const client = new Tabstack({
    apiKey: process.env.TABSTACK_API_KEY
  });

  const regions = ['US', 'GB', 'EU'];
  const prices = {};

  for (const country of regions) {
    const data = await client.extract.json({
      url: productUrl,
      json_schema: schema,
      geo_target: { country }
    });
    prices[country] = data;
  }

  return prices;
}

// Usage
const result = await compareRegionalPricing(
  'https://shop.example.com/product/wireless-headphones'
);
console.log('Regional pricing:', result);
// Output: { US: { price: 99.99, currency: 'USD', inStock: true }, GB: { price: 89.99, currency: 'GBP', inStock: true }, ... }
import os
from tabstack import Tabstack

def compare_regional_pricing(product_url):
    schema = {
        'type': 'object',
        'properties': {
            'price': {'type': 'number'},
            'currency': {'type': 'string'},
            'inStock': {'type': 'boolean'}
        }
    }

    client = Tabstack(api_key=os.environ["TABSTACK_API_KEY"])

    regions = ['US', 'GB', 'EU']
    prices = {}

    for country in regions:
        data = client.extract.json(
            url=product_url,
            json_schema=schema,
            geo_target={'country': country}
        )
        prices[country] = data

    return prices

# Usage
result = compare_regional_pricing(
    'https://shop.example.com/product/wireless-headphones'
)
print('Regional pricing:', result)
# Output: { 'US': { 'price': 99.99, 'currency': 'USD', 'inStock': True }, 'GB': { 'price': 89.99, 'currency': 'GBP', 'inStock': True }, ... }
This example loops through multiple regions, making a request for each with the appropriate geo_target parameter. Each request extracts the same product data but from the perspective of users in different countries, allowing you to compare pricing, availability, and other region-specific details.
Putting It to Work: Processing and Saving Data
Getting the JSON is just the first step. Here’s how you can immediately process or save that data.
Processing Extracted Data
Once you have the JSON response, you can use standard programming-language features to filter, sort, and analyze it. This example takes the product data from our e-commerce example and finds only the in-stock products, sorted by price.
- JavaScript
- Python
This code shows how to filter and sort the extracted data in JavaScript.
import Tabstack from '@tabstack/sdk';

async function processExtractedData() {
  const schema = {
    type: 'object',
    properties: {
      products: {
        type: 'array',
        items: {
          type: 'object',
          properties: {
            name: { type: 'string' },
            price: { type: 'number' },
            inStock: { type: 'boolean' }
          }
        }
      }
    }
  };

  const client = new Tabstack({
    apiKey: process.env.TABSTACK_API_KEY
  });

  const data = await client.extract.json({
    url: 'https://shop.example.com/category/laptops',
    json_schema: schema
  });

  // Filter and process the data
  const availableProducts = data.products
    .filter(p => p.inStock)
    .sort((a, b) => a.price - b.price);

  console.log('Available products (lowest price first):');
  availableProducts.forEach(product => {
    console.log(`${product.name}: $${product.price}`);
  });

  return availableProducts;
}

// Call the function
processExtractedData();
The SDK returns the extracted data directly. Use standard array methods like .filter() to get only in-stock products, then .sort() to order by price. This is standard JavaScript array manipulation—the SDK gives you clean data, you process it however you need.
How to Run:
- Save this code as extract.mjs and run with node extract.mjs.
This code shows how to filter and sort the extracted data in Python.
import os
from tabstack import Tabstack

def process_extracted_data():
    schema = {
        'type': 'object',
        'properties': {
            'products': {
                'type': 'array',
                'items': {
                    'type': 'object',
                    'properties': {
                        'name': {'type': 'string'},
                        'price': {'type': 'number'},
                        'inStock': {'type': 'boolean'}
                    }
                }
            }
        }
    }

    with Tabstack(api_key=os.environ.get("TABSTACK_API_KEY")) as client:
        data = client.extract.json(
            url='https://shop.example.com/category/laptops',
            json_schema=schema
        )

    # Filter and process the data
    # Use .get('inStock') to avoid errors if the key is missing
    available_products = [
        p for p in data['products'] if p.get('inStock')
    ]

    # Sort the filtered list
    available_products.sort(key=lambda x: x['price'])

    print('Available products (lowest price first):')
    for product in available_products:
        print(f"{product['name']}: ${product['price']}")

    return available_products

# Call the function
if __name__ == "__main__":
    process_extracted_data()
The SDK returns the extracted data as a dictionary. Use list comprehension to filter and sort() with a key function to order by price. Note the use of .get('inStock') instead of direct key access—this prevents errors if a product is missing that field.
How to Run:
- Save and run the script: python extract.py.
Saving Data to Files
You can also easily save your extracted data to a file, like a JSON file for later use or a CSV file for analysis in a spreadsheet.
- JavaScript
- Python
This script fetches data and saves it to both extracted-data.json and extracted-data.csv.
import Tabstack from '@tabstack/sdk';
import { writeFile } from 'fs/promises';

async function saveExtractedData() {
  const schema = {
    type: 'object',
    properties: {
      stories: {
        type: 'array',
        items: {
          type: 'object',
          properties: {
            title: { type: 'string' },
            points: { type: 'number' }
          }
        }
      }
    }
  };

  const client = new Tabstack({
    apiKey: process.env.TABSTACK_API_KEY
  });

  const data = await client.extract.json({
    url: 'https://news.ycombinator.com',
    json_schema: schema
  });

  try {
    // Save as JSON
    await writeFile('extracted-data.json', JSON.stringify(data, null, 2));
    console.log('Data saved to extracted-data.json');

    // Save as CSV
    const csvHeader = 'Title,Points';
    const csvRows = data.stories.map(s => {
      // Quote the title and escape embedded quotes so titles
      // containing commas or quotes stay CSV-safe
      const title = `"${s.title.replace(/"/g, '""')}"`;
      return `${title},${s.points}`;
    });
    const csv = [csvHeader, ...csvRows].join('\n');
    await writeFile('extracted-data.csv', csv);
    console.log('Data saved to extracted-data.csv');
  } catch (error) {
    console.error('Error saving data:', error);
  }
}

// Call the function
saveExtractedData();
This example shows saving data in two formats. The SDK returns data directly, which you can save as JSON using JSON.stringify(). For CSV, the code maps each story to a CSV row, properly escaping quotes with "" to handle titles containing commas or quotes.
How to Run:
- Save this as save.mjs.
- Run from your terminal: node save.mjs.
- Check your directory for two new files: extracted-data.json and extracted-data.csv.
This script fetches data and saves it using Python's built-in json and csv modules.
import os
import json
import csv
from tabstack import Tabstack

def save_extracted_data():
    schema = {
        'type': 'object',
        'properties': {
            'stories': {
                'type': 'array',
                'items': {
                    'type': 'object',
                    'properties': {
                        'title': {'type': 'string'},
                        'points': {'type': 'number'}
                    }
                }
            }
        }
    }

    with Tabstack(api_key=os.environ.get("TABSTACK_API_KEY")) as client:
        data = client.extract.json(
            url='https://news.ycombinator.com',
            json_schema=schema
        )

    try:
        # Save as JSON
        with open('extracted-data.json', 'w', encoding='utf-8') as f:
            json.dump(data, f, indent=2)
        print('Data saved to extracted-data.json')

        # Save as CSV
        with open('extracted-data.csv', 'w', newline='', encoding='utf-8') as f:
            writer = csv.writer(f)
            # Write the header
            writer.writerow(['Title', 'Points'])
            # Write the data rows
            for story in data['stories']:
                writer.writerow([story['title'], story['points']])
        print('Data saved to extracted-data.csv')
    except Exception as e:
        print(f'Error saving data: {e}')

# Call the function
if __name__ == "__main__":
    save_extracted_data()
Python makes file saving straightforward with built-in modules. The json.dump() function handles JSON serialization with proper formatting. For CSV, the csv.writer object handles all the formatting details—quoting, escaping, and special characters—automatically. The with blocks ensure files are properly closed even if errors occur.
How to Run:
- Save this as save.py.
- Run from your terminal: python save.py.
- Check your directory for extracted-data.json and extracted-data.csv.
Error Handling
A robust application must handle potential failures. The API uses standard HTTP status codes to indicate the success or failure of a request.
Common Error Status Codes
| Status Code | Error Message | Description |
|---|---|---|
| 400 | url is required | The request body is missing the required url parameter. |
| 400 | json schema is required | The request body is missing the required json_schema parameter. |
| 400 | json schema must be a valid object | The json_schema provided is not valid. |
| 400 | invalid JSON request body | The request body itself is malformed JSON. |
| 401 | Unauthorized - Invalid token | Your Authorization header is missing or your API key is invalid. |
| 422 | url is invalid | The provided URL is malformed or cannot be processed. |
| 500 | failed to fetch URL | The server encountered an error trying to access the target URL. |
| 500 | web page is too large | The target page's content exceeds the processing size limit. |
| 500 | failed to generate JSON | The server failed to extract data matching your schema. This can happen if the page structure is vastly different from what the schema implies. |
Error Response Format
All error responses return a JSON object with a single error field.
{
  "error": "json schema is required"
}
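If you're calling the endpoint directly rather than through an SDK, you can surface this error field yourself. A minimal sketch using Python's requests library, deliberately omitting json_schema to trigger a 400:

import os
import requests

response = requests.post(
    "https://api.tabstack.ai/v1/extract/json",
    headers={"Authorization": f"Bearer {os.environ['TABSTACK_API_KEY']}"},
    json={"url": "https://example.com"},  # json_schema omitted on purpose
    timeout=60,
)

if response.ok:
    print(response.json())
else:
    # Error responses carry a single "error" field, as shown above
    print(f"HTTP {response.status_code}: {response.json().get('error')}")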
Error Handling Examples
Here’s how to build robust error handling into your application.
- JavaScript
- Python
- curl
The SDK provides specific exception classes for different error types.
import Tabstack, {
  AuthenticationError,
  BadRequestError,
  UnprocessableEntityError,
  InternalServerError,
  TabstackError
} from '@tabstack/sdk';

async function extractWithErrorHandling(url, jsonSchema) {
  try {
    const client = new Tabstack({
      apiKey: process.env.TABSTACK_API_KEY,
      timeout: 30000
    });

    const result = await client.extract.json({
      url,
      json_schema: jsonSchema
    });
    return result;
  } catch (error) {
    if (error instanceof BadRequestError) {
      console.error(`Bad request: ${error.message}`);
    } else if (error instanceof AuthenticationError) {
      console.error('Authentication failed. Check your API key.');
    } else if (error instanceof UnprocessableEntityError) {
      console.error(`Invalid URL: ${error.message}`);
    } else if (error instanceof InternalServerError) {
      console.error(`Server error: ${error.message}`);
    } else if (error instanceof TabstackError) {
      console.error(`API error: ${error.message}`);
    } else {
      console.error(`Unexpected error: ${error.message}`);
    }
    throw error;
  }
}

// --- Usage Example ---
const schema = { type: 'object', properties: { title: { type: 'string' } } };

// Test success
extractWithErrorHandling('https://example.com', schema)
  .then(data => console.log('Success:', data))
  .catch(error => console.error('Failed:', error.message));

// Test failure (invalid URL)
extractWithErrorHandling('not-a-valid-url', schema)
  .then(data => console.log('Success:', data))
  .catch(error => console.error('Failed:', error.message));
The SDK provides specific exception classes like AuthenticationError, BadRequestError, and InternalServerError for targeted error handling. Setting a timeout on the client prevents indefinitely hanging requests. Each exception type lets you handle errors appropriately—retry server errors, fix authentication issues, or validate URLs.
How to Run:
- Save this as error_handling.mjs.
- Run node error_handling.mjs to see both success and failure cases.
The SDK provides specific exception classes for different error types.
import os
import json
import tabstack
from tabstack import Tabstack

def extract_with_error_handling(url, json_schema):
    try:
        with Tabstack(
            api_key=os.environ.get("TABSTACK_API_KEY"),
            timeout=30.0  # Set a 30-second timeout
        ) as client:
            result = client.extract.json(
                url=url,
                json_schema=json_schema
            )
            return result
    except tabstack.BadRequestError as e:
        print(f'Error: Bad request: {e}')
    except tabstack.AuthenticationError:
        print('Error: Authentication failed. Check your API key.')
    except tabstack.UnprocessableEntityError as e:
        print(f'Error: Invalid URL: {e}')
    except tabstack.InternalServerError as e:
        print(f'Error: Server error: {e}')
    except tabstack.APITimeoutError:
        print('Error: Request timed out')
    except tabstack.APIConnectionError as e:
        print(f'Error: Network error: {e}')
    except tabstack.APIStatusError as e:
        print(f'Error: API error ({e.status_code}): {e}')
    return None

# --- Usage Example ---
schema = {'type': 'object', 'properties': {'title': {'type': 'string'}}}

# Test success
print("--- Testing success ---")
data = extract_with_error_handling('https://example.com', schema)
if data:
    print('Success:', json.dumps(data, indent=2))

# Test failure (invalid URL)
print("\n--- Testing failure ---")
data = extract_with_error_handling('not-a-valid-url', schema)
if data is None:
    print('Failure handled correctly.')
The SDK provides specific exception classes like AuthenticationError, BadRequestError, and InternalServerError for targeted error handling. Setting a timeout on the client prevents indefinitely hanging requests. Each exception type lets you handle errors appropriately—retry server errors, fix authentication issues, or validate URLs.
How to Run:
- Save this as error_handling.py.
- Run python error_handling.py to see both success and failure cases.
This bash script demonstrates how to check the HTTP status code when using curl.
#!/bin/bash

# -s: silent mode
# -w "\n%{http_code}": appends the http_code to the output
response=$(curl -s -w "\n%{http_code}" -X POST https://api.tabstack.ai/v1/extract/json \
  -H "Authorization: Bearer $TABSTACK_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://example.com",
    "json_schema": {
      "type": "object",
      "properties": {
        "title": {"type": "string"}
      }
    }
  }')

# Split response body and status code
http_code=$(echo "$response" | tail -n1)
response_body=$(echo "$response" | sed '$d')

if [ "$http_code" -eq 200 ]; then
  echo "Success:"
  # Use 'jq' to pretty-print the JSON
  echo "$response_body" | jq .
else
  echo "Error (HTTP $http_code):"
  # Try to parse the error with 'jq'
  echo "$response_body" | jq .error
  exit 1
fi
This bash script captures both the response body and HTTP status code by using curl's -w flag to append the status code. The script then splits them apart using tail and sed. It checks the status code and uses jq to format the output—either pretty-printing success responses or extracting error messages from failures.
How to Run:
- You may need to install jq: sudo apt-get install jq (Linux) or brew install jq (macOS).
- Save this as error_handling.sh.
- Make it executable: chmod +x error_handling.sh.
- Run it: ./error_handling.sh.
Advanced Usage Patterns
Batch Processing Multiple URLs
To extract data from multiple pages (like a list of product pages), you can loop through your URLs and call the API for each one.
Note: Please be a good web citizen. When running batch jobs, we recommend adding a small delay between requests to avoid overwhelming the API or the target server.
- JavaScript
- Python
This function loops through a list of URLs and aggregates the results.
import Tabstack, { TabstackError } from '@tabstack/sdk';

// Helper function for a respectful delay
const sleep = (ms) => new Promise(resolve => setTimeout(resolve, ms));

async function batchExtract(urls, schema) {
  const results = [];
  const client = new Tabstack({
    apiKey: process.env.TABSTACK_API_KEY
  });

  for (const url of urls) {
    try {
      console.log(`Extracting ${url}...`);
      const data = await client.extract.json({
        url,
        json_schema: schema
      });
      results.push({ url, success: true, data });
    } catch (error) {
      const errorMessage = error instanceof TabstackError
        ? error.message
        : 'Unknown error';
      console.error(`Failed to extract ${url}: ${errorMessage}`);
      results.push({ url, success: false, error: errorMessage });
    }

    // Respectful rate limiting: wait 500ms between requests
    await sleep(500);
  }

  return results;
}

// Usage
const urls = [
  'https://example.com/page1',
  'https://example.com/page2',
  'https://example.com/page3'
];

const schema = {
  type: 'object',
  properties: {
    title: { type: 'string' },
    content: { type: 'string' }
  }
};

batchExtract(urls, schema).then(results => {
  const successful = results.filter(r => r.success);
  console.log(`\n--- Batch Complete ---`);
  console.log(`Successfully extracted ${successful.length}/${urls.length} pages.`);
});
This function loops through multiple URLs, extracting data from each with the same schema. The key detail is the rate limiting—await sleep(500) adds a 500ms delay between requests to avoid overwhelming servers. Each result (success or failure) is tracked in the results array, letting you see which extractions worked and which didn't.
How to Run:
- Save as batch.mjs.
- Run node batch.mjs.
This function loops through a list of URLs and aggregates the results.
import os
import time
import tabstack
from tabstack import Tabstack

def batch_extract(urls, schema):
    results = []

    with Tabstack(api_key=os.environ.get("TABSTACK_API_KEY")) as client:
        for url in urls:
            try:
                print(f'Extracting {url}...')
                data = client.extract.json(
                    url=url,
                    json_schema=schema
                )
                results.append({'url': url, 'success': True, 'data': data})
            except tabstack.APIStatusError as error:
                print(f'Failed to extract {url}: {error.status_code} - {error}')
                results.append({'url': url, 'success': False, 'error': str(error)})
            except Exception as error:
                print(f'Failed to extract {url}: {error}')
                results.append({'url': url, 'success': False, 'error': str(error)})

            # Respectful rate limiting: wait 0.5 seconds between requests
            time.sleep(0.5)

    return results

# Usage
if __name__ == "__main__":
    urls = [
        'https://example.com/page1',
        'https://example.com/page2',
        'https://example.com/page3'
    ]
    schema = {
        'type': 'object',
        'properties': {
            'title': {'type': 'string'},
            'content': {'type': 'string'}
        }
    }

    results = batch_extract(urls, schema)
    successful = [r for r in results if r['success']]
    print(f'\n--- Batch Complete ---')
    print(f'Successfully extracted {len(successful)}/{len(urls)} pages.')
The SDK handles connection management automatically when using a context manager. Loop through URLs, extract data, and add a delay between requests. Using time.sleep(0.5) between requests is good citizenship—it prevents hitting rate limits and reduces load on target servers. Error handling ensures one failure doesn't stop the entire batch.
How to Run:
- Save as batch.py.
- Run python batch.py.
Best Practices
1. Start Simple and Iterate
Writing complex JSON schemas from scratch can be error-prone. Start with a minimal schema capturing just the essential fields, test it against your target page, then gradually add more properties. Use description fields liberally to help the AI extractor understand what data you're looking for.
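As an illustration, here is how that progression might look for a product page (a sketch; the field names are illustrative, not prescriptive):

# Iteration 1: bare minimum -- confirm the extractor finds the items at all
schema_v1 = {
    'type': 'object',
    'properties': {
        'products': {
            'type': 'array',
            'items': {
                'type': 'object',
                'properties': {'name': {'type': 'string'}}
            }
        }
    }
}

# Iteration 2: once v1 returns sensible results, add fields and descriptions
schema_v2 = {
    'type': 'object',
    'properties': {
        'products': {
            'type': 'array',
            'items': {
                'type': 'object',
                'properties': {
                    'name': {'type': 'string',
                             'description': 'Product name as shown in the listing'},
                    'price': {'type': 'number',
                              'description': 'Current price as a number, no currency symbol'}
                },
                'required': ['name']
            }
        }
    }
}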
2. Test Schemas on Representative Pages
A schema that works for one product page might fail on another (e.g., a "product bundle" page). Before deploying to production, test your schema against a handful of representative URLs to ensure it's robust.
This script shows a simple testing harness for a schema.
# This example is in Python
def test_schema(urls, schema):
    print('--- Testing schema ---')
    success_count = 0
    for url in urls:
        try:
            # Re-using the 'extract_with_error_handling' function from earlier
            data = extract_with_error_handling(url, schema)
            if data:
                print(f'✓ {url}: Success')
                success_count += 1
            else:
                print(f'✗ {url}: Failed extraction')
        except Exception as e:
            print(f'✗ {url}: {e}')
    print(f'--- Test Complete: {success_count}/{len(urls)} successful ---')

# Test on multiple representative pages
test_urls = [
    'https://example.com/product/1',
    'https://example.com/product/2',
    'https://example.com/product/on-sale'
]

# your_schema = ... (the schema you want to test)
# test_schema(test_urls, your_schema)
This simple test harness validates your schema against multiple URLs. It gives you a quick pass/fail report, helping you identify edge cases before production. Testing against varied page types (regular products, sale items, bundles) reveals schema weaknesses early.
3. Handle Missing or Null Data
Web pages are unreliable. A "rating" field might not exist for a new product. To prevent your application from crashing, design your schemas and code to handle missing data.
This schema demonstrates how to define optional and nullable fields.
{
  "type": "object",
  "properties": {
    "title": {
      "type": "string",
      "description": "Product title"
    },
    "price": {
      "type": ["number", "null"],
      "description": "Price (may be null if 'Call for Price')"
    },
    "rating": {
      "type": "number",
      "description": "Customer rating (optional)"
    }
  },
  "required": ["title"]
}
Using "type": ["number", "null"] allows a field to be either a number or null—useful for prices that might show "Call for quote." Fields not in the required array are optional; they'll be omitted if not found. Only fields in required must exist, or the extraction fails for that item.
4. Use Caching Strategically
- Default (Cached): For most use cases, like scraping articles or products that don't change every second, our default caching is ideal. It's fast and reduces load.
- nocache: true: Only use this when you absolutely need real-time data, such as for monitoring stock prices, or when you are actively debugging a schema (see the sketch below).
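As a sketch, a debugging run with caching bypassed might look like the following. Note one assumption: that the Python SDK forwards nocache as a keyword argument mirroring the REST parameter; verify this against your SDK version.

import os
from tabstack import Tabstack

with Tabstack(api_key=os.environ.get("TABSTACK_API_KEY")) as client:
    result = client.extract.json(
        url='https://example.com/products',
        json_schema={'type': 'object', 'properties': {'title': {'type': 'string'}}},
        nocache=True  # assumed kwarg: force a fresh fetch while iterating on a schema
    )
    print(result)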
5. Validate Extracted Data
Don't trust, verify. Even if the API successfully returns data, add a layer of validation in your own application before using it.
This JavaScript snippet shows a basic post-extraction validation check.
// ... after you get the 'data' from the API ...
// const data = await extractAndValidate(url, schema);

if (!data.products || data.products.length === 0) {
  throw new Error('Validation failed: No products array found');
}

// Check data quality
const invalidProducts = data.products.filter(p => !p.name || !p.price);
if (invalidProducts.length > 0) {
  console.warn(`Warning: Found ${invalidProducts.length} products with missing name or price`);
}

// If it passed, the data is good to use
// processProducts(data.products);
This validation checks that you got the expected structure (a products array exists) and that individual items have required fields. Filtering for incomplete items helps you monitor data quality—you might get a successful API response, but some products could be missing critical information.