Research

client.agent.research(body: AgentResearchParams { query, fetch_timeout, mode, nocache }, options?: RequestOptions): ResearchEvent | Stream<ResearchEvent>
POST /research

Execute AI-powered research queries that search the web, analyze sources, and synthesize comprehensive answers. This endpoint always streams responses using Server-Sent Events (SSE).

Streaming Response:

  • All responses are streamed using Server-Sent Events (text/event-stream)
  • Real-time progress updates as research progresses through phases

Research Modes:

  • fast - Quick answers with minimal web searches (default)
  • balanced - Standard research with multiple iterations

Use Cases:

  • Answering complex questions with cited sources
  • Synthesizing information from multiple web sources
  • Research reports on specific topics
  • Fact-checking and verification tasks
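
Before the parameter reference, here is a minimal, hedged sketch of consuming the streamed events with the SDK. It assumes the value returned by research() can be iterated with for await (typical for SDK stream wrappers, but verify against the client version you are using); the event names and data fields come from the envelopes documented under Returns.

import Tabstack from '@tabstack/sdk';

const client = new Tabstack({ apiKey: process.env['TABSTACK_API_KEY'] });

// Typed loosely here because the concrete stream type depends on the SDK
// version; the envelopes themselves are documented under Returns below.
const stream: any = await client.agent.research({
  query: 'What are the latest developments in quantum computing?',
  mode: 'balanced',
});

for await (const evt of stream) {
  if (evt.event === 'complete') {
    console.log(evt.data.report);                    // final synthesized report
  } else if (evt.event === 'error') {
    console.error(evt.data.error.message);           // research failed
  } else {
    console.log(`[${evt.event}]`, evt.data.message); // progress update
  }
}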
Parameters
body: AgentResearchParams { query, fetch_timeout, mode, nocache }
query: string

The research query or question to answer. Maximum 10,000 characters.

maxLength: 10000
fetch_timeout?: number

Timeout in seconds for fetching web pages

mode?: "fast" | "balanced"

Research mode: fast (quick answers, default), balanced (standard research)

One of the following:
"fast"
"balanced"
nocache?: boolean

Skip cache and force fresh research
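
For illustration, a request body that exercises every documented parameter might look like the following; the values are placeholders, not recommendations.

const body = {
  // Required. The question to research; up to 10,000 characters.
  query: 'Compare SSE and WebSockets for pushing updates to browsers',
  // Optional. Per-page fetch timeout, in seconds.
  fetch_timeout: 30,
  // Optional. 'fast' (default, quick answers) or 'balanced' (standard research).
  mode: 'balanced' as const,
  // Optional. Bypass cached results and force fresh research.
  nocache: true,
};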

Returns
ResearchEvent = V1ResearchEventAnalyzingEnd { data, event } | V1ResearchEventAnalyzingStart { data, event } | V1ResearchEventComplete { data, event } | 20 more

A Server-Sent Event from /v1/research. Typed discriminated union keyed on event.

One of the following:
V1ResearchEventAnalyzingEnd { data, event }

Envelope for the “analyzing:end” event from /v1/research.

data: Data { analyzed, failed, iteration, 3 more }
analyzed: number
failed: number
iteration: number
message: string
samples: Array<Sample>
domain: string
title: string
url: string
urlSource: "user-input" | "search-result" | "extracted-link"

URL source tracking - where a URL came from

One of the following:
"user-input"
"search-result"
"extracted-link"
relevance?: "low" | "medium" | "high"
One of the following:
"low"
"medium"
"high"
reliability?: "low" | "medium" | "high"
One of the following:
"low"
"medium"
"high"
summary?: string
timestamp: number
event: "analyzing:end"
V1ResearchEventAnalyzingStart { data, event }

Envelope for the “analyzing:start” event from /v1/research.

data: Data { iteration, message, pageCount, timestamp }
iteration: number
message: string
pageCount: number
timestamp: number
event: "analyzing:start"
V1ResearchEventComplete { data, event }

Envelope for the “complete” event from /v1/research.

data: Data { message, metadata, report, timestamp }

complete - Research finished successfully

message: string
metadata: Metadata { executedQueries, mode, prompt, 11 more }

Research metadata

Note: citedPages, gapEvaluations, outline, and judgments are optional to support fast mode, which skips these phases for maximum speed.

executedQueries: Array<Array<string>>
mode: "fast" | "balanced" | "deep" | 2 more

Research mode determines depth, thinking budget, and quality controls

Modes (in order of cost/thoroughness):

  • fast: Quick answers with minimal validation (~$2, 1 iteration, no judge)
  • balanced: Standard research with moderate depth (~$8, 3 iterations, Flash models, no judge)
  • deep: Thorough research with judge review (~$15, 5 iterations, Flash models, with judge)
  • max: Maximum quality with Pro models (~$40, 5 iterations, Pro models, with judge)
  • ultra: Ultimate tier - all Pro models, 10 iterations (expensive, for when accuracy is paramount)
One of the following:
"fast"
"balanced"
"deep"
"max"
"ultra"
prompt: string
queryComplexity: "simple" | "moderate" | "complex"
One of the following:
"simple"
"moderate"
"complex"
researchObjective: string
researchPlan: string
researchQuestions: Array<string>
totalPagesAnalyzed: number

Total pages analyzed across all iterations

citedPages?: Array<CitedPage>

Pages cited in the report, ordered by first citation appearance

id: string
claims: Array<string>
sourceQueries: Array<string>
url: string
depth?: number
fullText?: string

Full page text (fetched markdown or search excerpts). Only populated when includeFullText: true in ResearchOptions.

  • Fast mode: Parallel API excerpts (~5000 chars)
  • Other modes: Fetched page markdown
parentUrl?: string
relevance?: "low" | "medium" | "high"
One of the following:
"low"
"medium"
"high"
reliability?: "low" | "medium" | "high"
One of the following:
"low"
"medium"
"high"
summary?: string

LLM-generated summary. Undefined in fast mode (no content analysis).

title?: string
urlSource?: "user-input" | "search-result" | "extracted-link"

URL source tracking - where a URL came from

One of the following:
"user-input"
"search-result"
"extracted-link"
gapEvaluations?: Array<GapEvaluation>
gapDescription: string

Based on unanswered/partial questions, what specific information is still needed?

questionAssessments: Array<QuestionAssessment>

Assessment of each research question’s status and findings

findings: string

What we learned (if answered/partial) or what’s missing (if unanswered)

question: string

The research question being assessed

status: "answered" | "partial" | "unanswered"

Status: answered (clear info), partial (some info, gaps remain), unanswered (no relevant info)

One of the following:
"answered"
"partial"
"unanswered"
researchCoverage: "Light" | "Moderate" | "Solid" | "Comprehensive"

Research coverage level - assesses quality across all questions.

Hierarchy: Light < Moderate < Solid < Comprehensive

  • Light: Basic info on some questions, most need more depth → Continue
  • Moderate: Multiple questions answered, some remain partial → Continue
  • Solid: Most questions well-answered with validated sources → Sufficient to stop
  • Comprehensive: All questions thoroughly answered, exceptional depth → Definitely stop
One of the following:
"Light"
"Moderate"
"Solid"
"Comprehensive"
shouldContinueResearch: boolean

Explicit decision: should research continue with another iteration?

  • Considers: how many questions unanswered/partial, coverage for mode, remaining iterations
  • Drives query generation: true → generate queries, false → stop researching
newResearchQuestions?: Array<string>

New research questions to add (optional, use sparingly)

  • Only if original decomposition missed something critical
  • Maximum 2-3 new questions total across all iterations
  • Most iterations should return empty array or omit this field
searchQueries?: Array<string>

Search queries to address identified gaps (only when shouldContinueResearch is true)

  • Target unanswered questions first, then partial questions
  • 3-10 targeted queries if shouldContinueResearch is true
  • Omit or provide empty array if shouldContinueResearch is false
judgments?: Array<Judgment>
approved: boolean
observation: string
score: number
feedback?: string
metrics?: Metrics { cachedFetches, cachedSearches, fetches, 7 more }

Complete research metrics

cachedFetches: number

Cached fetch count (subset of fetches)

cachedSearches: Record<string, number>

Cached search count by provider name (subset of searches)

fetches: number

Fetch count (number of pages fetched)

iterations: number

Number of research iterations performed

phases: Record<string, Phases>

Phase timings with duration in milliseconds

duration: number
robotsBlocked: number

Number of URLs blocked by robots.txt

searches: Record<string, number>

Search count by provider name (e.g., “bright-data”, “parallel”)

successRates: SuccessRates { analyzes, fetches, searches }

Success rate metrics

analyzes: number
fetches: number
searches: number
tokens: Record<string, Tokens>

Token usage by model ID (e.g., “gemini-2.5-flash”)

input: number
output: number
totalDuration: number

Total duration in milliseconds

outline?: Outline { directAnswer, keyTakeaways, outline, relevantSourceIds }

Report outline from research writer

directAnswer: string
keyTakeaways: Array<string>
outline: string
relevantSourceIds: Array<string>
urlSources?: URLSources { extractedLinks, searchResults, userProvided }
extractedLinks: number
searchResults: number
userProvided: number
report: string
timestamp: number
event: "complete"
V1ResearchEventError { data, event }

Envelope for the “error” event from /v1/research.

data: Data { error, message, timestamp, 2 more }

error - Research failed

error: Error { message, name, stack }
message: string
name: string
stack?: string
message: string
timestamp: number
activity?: "prefetching" | "planning" | "iteration" | 7 more

Activity types for research workflow

One of the following:
"prefetching"
"planning"
"iteration"
"searching"
"analyzing"
"following"
"evaluating"
"outlining"
"writing"
"judging"
iteration?: number
event: "error"
V1ResearchEventEvaluatingEnd { data, event }

Envelope for the “evaluating:end” event from /v1/research.

data: Data { coverage, gaps, iteration, 5 more }
coverage: "Light" | "Moderate" | "Solid" | "Comprehensive"
One of the following:
"Light"
"Moderate"
"Solid"
"Comprehensive"
gaps: string
iteration: number
message: string
nextQueries: Array<string>
questionAssessments: Array<QuestionAssessment>
findings: string

What we learned (if answered/partial) or what’s missing (if unanswered)

question: string

The research question being assessed

status: "answered" | "partial" | "unanswered"

Status: answered (clear info), partial (some info, gaps remain), unanswered (no relevant info)

One of the following:
"answered"
"partial"
"unanswered"
shouldContinue: boolean
timestamp: number
event: "evaluating:end"
V1ResearchEventEvaluatingStart { data, event }

Envelope for the “evaluating:start” event from /v1/research.

data: Data { iteration, message, pagesAnalyzed, 2 more }
iteration: number
message: string
pagesAnalyzed: number

Total pages analyzed so far (including this iteration)

questionCount: number

Number of research questions being assessed

timestamp: number
event: "evaluating:start"
V1ResearchEventFollowingEnd { data, event }

Envelope for the “following:end” event from /v1/research.

data: Data { failed, followed, iteration, 3 more }
failed: number
followed: number
iteration: number
message: string
samples: Array<Sample>
domain: string
title: string
url: string
urlSource: "user-input" | "search-result" | "extracted-link"

URL source tracking - where a URL came from

One of the following:
"user-input"
"search-result"
"extracted-link"
relevance?: "low" | "medium" | "high"
One of the following:
"low"
"medium"
"high"
reliability?: "low" | "medium" | "high"
One of the following:
"low"
"medium"
"high"
summary?: string
timestamp: number
event: "following:end"
V1ResearchEventFollowingStart { data, event }

Envelope for the “following:start” event from /v1/research.

data: Data { iteration, linkCount, message, timestamp }
iteration: number
linkCount: number
message: string
timestamp: number
event: "following:start"
V1ResearchEventIterationEnd { data, event }

Envelope for the “iteration:end” event from /v1/research.

data: Data { isLast, iteration, message, 2 more }
isLast: boolean

Whether this is the final iteration

iteration: number
message: string
timestamp: number
stopReason?: "max_iterations" | "coverage_sufficient"

Why research iterations stopped (only present when isLast is true)

One of the following:
"max_iterations"
"coverage_sufficient"
event: "iteration:end"
V1ResearchEventIterationStart { data, event }

Envelope for the “iteration:start” event from /v1/research.

data: Data { iteration, maxIterations, message, 2 more }
iteration: number
maxIterations: number

Maximum iterations for this research mode

message: string
queries: Array<string>

Search queries to execute in this iteration

timestamp: number
event: "iteration:start"
V1ResearchEventJudgingEnd { data, event }

Envelope for the “judging:end” event from /v1/research.

data: Data { approved, attempt, message, 3 more }
approved: boolean
attempt: number
message: string
score: number
timestamp: number
feedback?: string
event: "judging:end"
V1ResearchEventJudgingStart { data, event }

Envelope for the “judging:start” event from /v1/research.

data: Data { attempt, maxAttempts, message, timestamp }
attempt: number
maxAttempts: number

Maximum attempts allowed (1 + maxRevisions)

message: string
timestamp: number
event: "judging:start"
V1ResearchEventOutliningEnd { data, event }

Envelope for the “outlining:end” event from /v1/research.

data: Data { message, sourcesSelected, timestamp }
message: string
sourcesSelected: number
timestamp: number
event: "outlining:end"
V1ResearchEventOutliningStart { data, event }

Envelope for the “outlining:start” event from /v1/research.

data: Data { message, pagesAnalyzed, qualityPageCount, timestamp }
message: string
pagesAnalyzed: number

Total pages analyzed across all iterations

qualityPageCount: number

Pages that meet quality threshold (medium+ relevance and reliability)

timestamp: number
event: "outlining:start"
V1ResearchEventPlanningEnd { data, event }

Envelope for the “planning:end” event from /v1/research.

data: Data { complexity, message, objective, 4 more }
complexity: "simple" | "moderate" | "complex"
One of the following:
"simple"
"moderate"
"complex"
message: string
objective: string
plan: string
queries: Array<string>
questions: Array<string>
timestamp: number
event: "planning:end"
V1ResearchEventPlanningStart { data, event }

Envelope for the “planning:start” event from /v1/research.

data: Data { hasPrefetchedContext, message, timestamp }
hasPrefetchedContext: boolean

Whether prefetched user-provided URLs exist for context

message: string
timestamp: number
event: "planning:start"
V1ResearchEventPrefetchingEnd { data, event }

Envelope for the “prefetching:end” event from /v1/research.

data: Data { failed, fetched, message, timestamp }
failed: number
fetched: number
message: string
timestamp: number
event: "prefetching:end"
V1ResearchEventPrefetchingStart { data, event }

Envelope for the “prefetching:start” event from /v1/research.

data: Data { message, timestamp, urlCount, urls }
message: string
timestamp: number
urlCount: number
urls: Array<string>
event: "prefetching:start"
V1ResearchEventSearchingEnd { data, event }

Envelope for the “searching:end” event from /v1/research.

data: Data { iteration, message, timestamp, 2 more }
iteration: number
message: string
timestamp: number
urlsFound: number
urlsNew: number
event: "searching:end"
V1ResearchEventSearchingStart { data, event }

Envelope for the “searching:start” event from /v1/research.

data: Data { iteration, message, queries, timestamp }
iteration: number
message: string
queries: Array<string>
timestamp: number
event: "searching:start"
V1ResearchEventStart { data, event }

Envelope for the “start” event from /v1/research.

data: Data { message, timestamp }

start - Research begins

message: string
timestamp: number
event: "start"
V1ResearchEventWritingEnd { data, event }

Envelope for the “writing:end” event from /v1/research.

data: Data { attempt, message, timestamp }
attempt: number
message: string
timestamp: number
event: "writing:end"
V1ResearchEventWritingStart { data, event }

Envelope for the “writing:start” event from /v1/research.

data: Data { attempt, isRevision, maxAttempts, 3 more }
attempt: number
isRevision: boolean

Whether this is a revision attempt (attempt > 1)

maxAttempts: number

Maximum attempts allowed (1 + maxRevisions)

message: string
timestamp: number
previousScore?: number

Previous judgment score if this is a revision

event: "writing:start"

Research

import Tabstack from '@tabstack/sdk';

const client = new Tabstack({
  apiKey: process.env['TABSTACK_API_KEY'], // This is the default and can be omitted
});

const researchEvent = await client.agent.research({
  query: 'What are the latest developments in quantum computing?',
});

console.log(researchEvent);
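
The complete event carries the final report plus metadata with optional fields such as cited pages and metrics (fast mode may omit some of them). A hedged sketch of reading those fields, assuming completeEvent is the complete envelope captured while iterating the stream:

// `completeEvent` is assumed to be the `complete` envelope from the stream;
// the fields below follow the Returns reference above.
function summarizeResearch(completeEvent: { data: Record<string, any> }): void {
  const { report, metadata } = completeEvent.data;
  console.log(report);

  // citedPages and metrics are optional.
  for (const page of metadata.citedPages ?? []) {
    console.log(`- ${page.title ?? page.url} (${page.url})`);
  }

  if (metadata.metrics) {
    const m = metadata.metrics;
    console.log(`Iterations: ${m.iterations}, pages fetched: ${m.fetches}`);
    console.log(`Total duration: ${m.totalDuration} ms`);
    const tokens = m.tokens as Record<string, { input: number; output: number }>;
    for (const [model, t] of Object.entries(tokens)) {
      console.log(`${model}: ${t.input} input / ${t.output} output tokens`);
    }
  }
}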