
Research

POST/research

Execute AI-powered research queries that search the web, analyze sources, and synthesize comprehensive answers. This endpoint always streams responses using Server-Sent Events (SSE).

Streaming Response:

  • All responses are streamed using Server-Sent Events (text/event-stream)
  • Real-time updates as research moves through its phases

Research Modes:

  • fast - Quick answers with minimal web searches (default)
  • balanced - Standard research with multiple iterations

Use Cases:

  • Answering complex questions with cited sources
  • Synthesizing information from multiple web sources
  • Research reports on specific topics
  • Fact-checking and verification tasks
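Because every response is streamed as SSE, clients need to parse `event:`/`data:` frames rather than a single JSON body. A minimal sketch (assuming standard single-line SSE framing as emitted by this endpoint; the commented request uses the endpoint URL and auth header from the curl example below):

```python
import json

def parse_sse(lines):
    """Yield (event, data) pairs from an iterable of SSE lines.

    Accumulates `data:` lines until a blank line terminates the frame,
    then decodes the payload as JSON.
    """
    event, data_parts = None, []
    for line in lines:
        if line.startswith("event:"):
            event = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data_parts.append(line[len("data:"):].strip())
        elif line == "" and event is not None:
            yield event, json.loads("".join(data_parts))
            event, data_parts = None, []

# Usage (requires the third-party `requests` package and a valid API key):
# import os, requests
# resp = requests.post(
#     "https://api.tabstack.ai/v1/research",
#     headers={"Authorization": f"Bearer {os.environ['TABSTACK_API_KEY']}"},
#     json={"query": "What are the latest developments in quantum computing?",
#           "mode": "fast"},
#     stream=True,
# )
# for event, data in parse_sse(resp.iter_lines(decode_unicode=True)):
#     print(event, data.get("message"))
```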
Body Parameters
query: string

The research query or question to answer. Maximum 10,000 characters.

maxLength: 10000
fetch_timeout: optional number

Timeout in seconds for fetching web pages

mode: optional "fast" or "balanced"

Research mode: fast (quick answers, default), balanced (standard research)

One of the following:
"fast"
"balanced"
nocache: optional boolean

Skip cache and force fresh research
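A request body combining the parameters above might look like this (values are illustrative):

```json
{
  "query": "What are the latest developments in quantum computing?",
  "mode": "balanced",
  "fetch_timeout": 30,
  "nocache": true
}
```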

Returns
ResearchEvent = object { data, event } or object { data, event } or object { data, event } or 20 more

A Server-Sent Event from /v1/research. Typed discriminated union keyed on event.

One of the following:
AnalyzingEnd object { data, event }

Envelope for the “analyzing:end” event from /v1/research.

data: object { analyzed, failed, iteration, 3 more }
analyzed: number
failed: number
iteration: number
message: string
samples: array of object { domain, title, url, 4 more }
domain: string
title: string
url: string
urlSource: "user-input" or "search-result" or "extracted-link"

URL source tracking - where a URL came from

One of the following:
"user-input"
"search-result"
"extracted-link"
relevance: optional "low" or "medium" or "high"
One of the following:
"low"
"medium"
"high"
reliability: optional "low" or "medium" or "high"
One of the following:
"low"
"medium"
"high"
summary: optional string
timestamp: number
event: "analyzing:end"
AnalyzingStart object { data, event }

Envelope for the “analyzing:start” event from /v1/research.

data: object { iteration, message, pageCount, timestamp }
iteration: number
message: string
pageCount: number
timestamp: number
event: "analyzing:start"
Complete object { data, event }

Envelope for the “complete” event from /v1/research.

data: object { message, metadata, report, timestamp }

complete - Research finished successfully

message: string
metadata: object { executedQueries, mode, prompt, 11 more }

Research metadata

Note: citedPages, gapEvaluations, outline, and judgments are optional to support fast mode, which skips these phases for maximum speed.

executedQueries: array of array of string
mode: "fast" or "balanced" or "deep" or 2 more

Research mode determines depth, thinking budget, and quality controls

Modes (in order of cost/thoroughness):

  • fast: Quick answers with minimal validation (~$2, 1 iteration, no judge)
  • balanced: Standard research with moderate depth (~$8, 3 iterations, Flash models, no judge)
  • deep: Thorough research with judge review (~$15, 5 iterations, Flash models, with judge)
  • max: Maximum quality with Pro models (~$40, 5 iterations, Pro models, with judge)
  • ultra: Ultimate tier - all Pro models, 10 iterations (expensive, for when accuracy is paramount)
One of the following:
"fast"
"balanced"
"deep"
"max"
"ultra"
prompt: string
queryComplexity: "simple" or "moderate" or "complex"
One of the following:
"simple"
"moderate"
"complex"
researchObjective: string
researchPlan: string
researchQuestions: array of string
totalPagesAnalyzed: number

Total pages analyzed across all iterations

citedPages: optional array of object { id, claims, sourceQueries, 9 more }

Pages cited in the report, ordered by first citation appearance

id: string
claims: array of string
sourceQueries: array of string
url: string
depth: optional number
fullText: optional string

Full page text (fetched markdown or search excerpts). Only populated when includeFullText: true in ResearchOptions.

  • Fast mode: Parallel API excerpts (~5000 chars)
  • Other modes: Fetched page markdown
parentUrl: optional string
relevance: optional "low" or "medium" or "high"
One of the following:
"low"
"medium"
"high"
reliability: optional "low" or "medium" or "high"
One of the following:
"low"
"medium"
"high"
summary: optional string

LLM-generated summary. Undefined in fast mode (no content analysis).

title: optional string
urlSource: optional "user-input" or "search-result" or "extracted-link"

URL source tracking - where a URL came from

One of the following:
"user-input"
"search-result"
"extracted-link"
gapEvaluations: optional array of object { gapDescription, questionAssessments, researchCoverage, 3 more }
gapDescription: string

Based on unanswered/partial questions, what specific information is still needed?

questionAssessments: array of object { findings, question, status }

Assessment of each research question’s status and findings

findings: string

What we learned (if answered/partial) or what’s missing (if unanswered)

question: string

The research question being assessed

status: "answered" or "partial" or "unanswered"

Status: answered (clear info), partial (some info, gaps remain), unanswered (no relevant info)

One of the following:
"answered"
"partial"
"unanswered"
researchCoverage: "Light" or "Moderate" or "Solid" or "Comprehensive"

Research coverage level - assesses quality across all questions.

Hierarchy: Light < Moderate < Solid < Comprehensive

  • Light: Basic info on some questions, most need more depth → Continue
  • Moderate: Multiple questions answered, some remain partial → Continue
  • Solid: Most questions well-answered with validated sources → Sufficient to stop
  • Comprehensive: All questions thoroughly answered, exceptional depth → Definitely stop
One of the following:
"Light"
"Moderate"
"Solid"
"Comprehensive"
shouldContinueResearch: boolean

Explicit decision: should research continue with another iteration?

  • Considers: how many questions unanswered/partial, coverage for mode, remaining iterations
  • Drives query generation: true → generate queries, false → stop researching
newResearchQuestions: optional array of string

New research questions to add (optional, use sparingly)

  • Only if original decomposition missed something critical
  • Maximum 2-3 new questions total across all iterations
  • Most iterations should return empty array or omit this field
searchQueries: optional array of string

Search queries to address identified gaps (only when shouldContinueResearch is true)

  • Target unanswered questions first, then partial questions
  • 3-10 targeted queries if shouldContinueResearch is true
  • Omit or provide empty array if shouldContinueResearch is false
judgments: optional array of object { approved, observation, score, feedback }
approved: boolean
observation: string
score: number
feedback: optional string
metrics: optional object { cachedFetches, cachedSearches, fetches, 7 more }

Complete research metrics

cachedFetches: number

Cached fetch count (subset of fetches)

cachedSearches: map[number]

Cached search count by provider name (subset of searches)

fetches: number

Fetch count (number of pages fetched)

iterations: number

Number of research iterations performed

phases: map[object { duration } ]

Phase timings with duration in milliseconds

duration: number
robotsBlocked: number

Number of URLs blocked by robots.txt

searches: map[number]

Search count by provider name (e.g., “bright-data”, “parallel”)

successRates: object { analyzes, fetches, searches }

Success rate metrics

analyzes: number
fetches: number
searches: number
tokens: map[object { input, output } ]

Token usage by model ID (e.g., “gemini-2.5-flash”)

input: number
output: number
totalDuration: number

Total duration in milliseconds

outline: optional object { directAnswer, keyTakeaways, outline, relevantSourceIds }

Report outline from research writer

directAnswer: string
keyTakeaways: array of string
outline: string
relevantSourceIds: array of string
urlSources: optional object { extractedLinks, searchResults, userProvided }
extractedLinks: number
searchResults: number
userProvided: number
report: string
timestamp: number
event: "complete"
Error object { data, event }

Envelope for the “error” event from /v1/research.

data: object { error, message, timestamp, 2 more }

error - Research failed

error: object { message, name, stack }
message: string
name: string
stack: optional string
message: string
timestamp: number
activity: optional "prefetching" or "planning" or "iteration" or 7 more

Activity types for research workflow

One of the following:
"prefetching"
"planning"
"iteration"
"searching"
"analyzing"
"following"
"evaluating"
"outlining"
"writing"
"judging"
iteration: optional number
event: "error"
EvaluatingEnd object { data, event }

Envelope for the “evaluating:end” event from /v1/research.

data: object { coverage, gaps, iteration, 5 more }
coverage: "Light" or "Moderate" or "Solid" or "Comprehensive"
One of the following:
"Light"
"Moderate"
"Solid"
"Comprehensive"
gaps: string
iteration: number
message: string
nextQueries: array of string
questionAssessments: array of object { findings, question, status }
findings: string

What we learned (if answered/partial) or what’s missing (if unanswered)

question: string

The research question being assessed

status: "answered" or "partial" or "unanswered"

Status: answered (clear info), partial (some info, gaps remain), unanswered (no relevant info)

One of the following:
"answered"
"partial"
"unanswered"
shouldContinue: boolean
timestamp: number
event: "evaluating:end"
EvaluatingStart object { data, event }

Envelope for the “evaluating:start” event from /v1/research.

data: object { iteration, message, pagesAnalyzed, 2 more }
iteration: number
message: string
pagesAnalyzed: number

Total pages analyzed so far (including this iteration)

questionCount: number

Number of research questions being assessed

timestamp: number
event: "evaluating:start"
FollowingEnd object { data, event }

Envelope for the “following:end” event from /v1/research.

data: object { failed, followed, iteration, 3 more }
failed: number
followed: number
iteration: number
message: string
samples: array of object { domain, title, url, 4 more }
domain: string
title: string
url: string
urlSource: "user-input" or "search-result" or "extracted-link"

URL source tracking - where a URL came from

One of the following:
"user-input"
"search-result"
"extracted-link"
relevance: optional "low" or "medium" or "high"
One of the following:
"low"
"medium"
"high"
reliability: optional "low" or "medium" or "high"
One of the following:
"low"
"medium"
"high"
summary: optional string
timestamp: number
event: "following:end"
FollowingStart object { data, event }

Envelope for the “following:start” event from /v1/research.

data: object { iteration, linkCount, message, timestamp }
iteration: number
linkCount: number
message: string
timestamp: number
event: "following:start"
IterationEnd object { data, event }

Envelope for the “iteration:end” event from /v1/research.

data: object { isLast, iteration, message, 2 more }
isLast: boolean

Whether this is the final iteration

iteration: number
message: string
timestamp: number
stopReason: optional "max_iterations" or "coverage_sufficient"

Why research iterations stopped (only present when isLast is true)

One of the following:
"max_iterations"
"coverage_sufficient"
event: "iteration:end"
IterationStart object { data, event }

Envelope for the “iteration:start” event from /v1/research.

data: object { iteration, maxIterations, message, 2 more }
iteration: number
maxIterations: number

Maximum iterations for this research mode

message: string
queries: array of string

Search queries to execute in this iteration

timestamp: number
event: "iteration:start"
JudgingEnd object { data, event }

Envelope for the “judging:end” event from /v1/research.

data: object { approved, attempt, message, 3 more }
approved: boolean
attempt: number
message: string
score: number
timestamp: number
feedback: optional string
event: "judging:end"
JudgingStart object { data, event }

Envelope for the “judging:start” event from /v1/research.

data: object { attempt, maxAttempts, message, timestamp }
attempt: number
maxAttempts: number

Maximum attempts allowed (1 + maxRevisions)

message: string
timestamp: number
event: "judging:start"
OutliningEnd object { data, event }

Envelope for the “outlining:end” event from /v1/research.

data: object { message, sourcesSelected, timestamp }
message: string
sourcesSelected: number
timestamp: number
event: "outlining:end"
OutliningStart object { data, event }

Envelope for the “outlining:start” event from /v1/research.

data: object { message, pagesAnalyzed, qualityPageCount, timestamp }
message: string
pagesAnalyzed: number

Total pages analyzed across all iterations

qualityPageCount: number

Pages that meet quality threshold (medium+ relevance and reliability)

timestamp: number
event: "outlining:start"
PlanningEnd object { data, event }

Envelope for the “planning:end” event from /v1/research.

data: object { complexity, message, objective, 4 more }
complexity: "simple" or "moderate" or "complex"
One of the following:
"simple"
"moderate"
"complex"
message: string
objective: string
plan: string
queries: array of string
questions: array of string
timestamp: number
event: "planning:end"
PlanningStart object { data, event }

Envelope for the “planning:start” event from /v1/research.

data: object { hasPrefetchedContext, message, timestamp }
hasPrefetchedContext: boolean

Whether prefetched user-provided URLs exist for context

message: string
timestamp: number
event: "planning:start"
PrefetchingEnd object { data, event }

Envelope for the “prefetching:end” event from /v1/research.

data: object { failed, fetched, message, timestamp }
failed: number
fetched: number
message: string
timestamp: number
event: "prefetching:end"
PrefetchingStart object { data, event }

Envelope for the “prefetching:start” event from /v1/research.

data: object { message, timestamp, urlCount, urls }
message: string
timestamp: number
urlCount: number
urls: array of string
event: "prefetching:start"
SearchingEnd object { data, event }

Envelope for the “searching:end” event from /v1/research.

data: object { iteration, message, timestamp, 2 more }
iteration: number
message: string
timestamp: number
urlsFound: number
urlsNew: number
event: "searching:end"
SearchingStart object { data, event }

Envelope for the “searching:start” event from /v1/research.

data: object { iteration, message, queries, timestamp }
iteration: number
message: string
queries: array of string
timestamp: number
event: "searching:start"
Start object { data, event }

Envelope for the “start” event from /v1/research.

data: object { message, timestamp }

start - Research begins

message: string
timestamp: number
event: "start"
WritingEnd object { data, event }

Envelope for the “writing:end” event from /v1/research.

data: object { attempt, message, timestamp }
attempt: number
message: string
timestamp: number
event: "writing:end"
WritingStart object { data, event }

Envelope for the “writing:start” event from /v1/research.

data: object { attempt, isRevision, maxAttempts, 3 more }
attempt: number
isRevision: boolean

Whether this is a revision attempt (attempt > 1)

maxAttempts: number

Maximum attempts allowed (1 + maxRevisions)

message: string
timestamp: number
previousScore: optional number

Previous judgment score if this is a revision

event: "writing:start"
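Since ResearchEvent is a discriminated union keyed on `event`, a client can branch on that field alone. A sketch of a dispatcher (field accesses follow the schema above; the return values are illustrative):

```python
def handle_event(event, data):
    """Route a decoded ResearchEvent payload by its `event` discriminant."""
    if event == "complete":
        # Final report text plus research metadata (mode, prompt, ...)
        return ("report", data["report"], data["metadata"]["mode"])
    if event == "error":
        return ("error", data["error"]["message"])
    if event.endswith(":end"):
        # Phase-completion events: analyzing:end, searching:end, ...
        return ("phase-done", event.split(":")[0])
    # start, iteration:start, and other progress events
    return ("progress", event)
```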

Research

curl https://api.tabstack.ai/v1/research \
    -H 'Content-Type: application/json' \
    -H "Authorization: Bearer $TABSTACK_API_KEY" \
    -d '{
          "query": "What are the latest developments in quantum computing?",
          "fetch_timeout": 30,
          "mode": "fast"
        }'