
Research

agent.research(**kwargs: AgentResearchParams) -> ResearchEvent
POST /research

Execute AI-powered research queries that search the web, analyze sources, and synthesize comprehensive answers. This endpoint always streams responses using Server-Sent Events (SSE).

Streaming Response:

  • All responses are streamed using Server-Sent Events (text/event-stream)
  • Real-time progress updates as the research moves through its phases

Research Modes:

  • fast - Quick answers with minimal web searches (default)
  • balanced - Standard research with multiple iterations

Use Cases:

  • Answering complex questions with cited sources
  • Synthesizing information from multiple web sources
  • Research reports on specific topics
  • Fact-checking and verification tasks
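
The example below is a minimal sketch of consuming this stream with the Python SDK (the query string is illustrative). Every streamed event carries an `event` discriminator and a `data` payload with a progress `message`, as documented under Returns.

import os
from tabstack import Tabstack

client = Tabstack(api_key=os.environ.get("TABSTACK_API_KEY"))

# The call returns a stream of typed Server-Sent Events; each one has an
# `event` discriminator and a `data` payload with a progress `message`.
for event in client.agent.research(
    query="Summarize recent progress in solid-state batteries",  # illustrative query
    mode="fast",  # quick answers with minimal web searches (default)
):
    print(f"[{event.event}] {event.data.message}")
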
Parameters
query: str

The research query or question to answer. Maximum 10,000 characters.

maxLength: 10000
fetch_timeout: Optional[int]

Timeout in seconds for fetching web pages

mode: Optional[Literal["fast", "balanced"]]

Research mode: fast (quick answers, default), balanced (standard research)

One of the following:
"fast"
"balanced"
nocache: Optional[bool]

Skip cache and force fresh research
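
All four parameters are keyword arguments to agent.research(). A hedged sketch passing each of them (the query text and the timeout value are illustrative):

import os
from tabstack import Tabstack

client = Tabstack(api_key=os.environ.get("TABSTACK_API_KEY"))

stream = client.agent.research(
    query="Compare the leading open-source vector databases",  # illustrative
    mode="balanced",    # standard research with multiple iterations
    fetch_timeout=30,   # seconds allowed for fetching each web page (illustrative value)
    nocache=True,       # skip cache and force fresh research
)
for event in stream:
    print(event.data.message)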

Returns

A Server-Sent Event from /v1/research. Typed discriminated union keyed on event.

One of the following:
class V1ResearchEventAnalyzingEnd:

Envelope for the “analyzing:end” event from /v1/research.

data: V1ResearchEventAnalyzingEndData
analyzed: float
failed: float
iteration: float
message: str
samples: List[V1ResearchEventAnalyzingEndDataSample]
domain: str
title: str
url: str
url_source: Literal["user-input", "search-result", "extracted-link"]

URL source tracking - where a URL came from

One of the following:
"user-input"
"search-result"
"extracted-link"
relevance: Optional[Literal["low", "medium", "high"]]
One of the following:
"low"
"medium"
"high"
reliability: Optional[Literal["low", "medium", "high"]]
One of the following:
"low"
"medium"
"high"
summary: Optional[str]
timestamp: float
event: Literal["analyzing:end"]
class V1ResearchEventAnalyzingStart:

Envelope for the “analyzing:start” event from /v1/research.

data: V1ResearchEventAnalyzingStartData
iteration: float
message: str
page_count: float
timestamp: float
event: Literal["analyzing:start"]
class V1ResearchEventComplete:

Envelope for the “complete” event from /v1/research.

data: V1ResearchEventCompleteData

complete - Research finished successfully

message: str
metadata: V1ResearchEventCompleteDataMetadata

Research metadata

Note: citedPages, gapEvaluations, outline, and judgments are optional to support fast mode, which skips these phases for maximum speed.

executed_queries: List[List[str]]
mode: Literal["fast", "balanced", "deep", 2 more]

Research mode determines depth, thinking budget, and quality controls

Modes (in order of cost/thoroughness):

  • fast: Quick answers with minimal validation (~$2, 1 iteration, no judge)
  • balanced: Standard research with moderate depth (~$8, 3 iterations, Flash models, no judge)
  • deep: Thorough research with judge review (~$15, 5 iterations, Flash models, with judge)
  • max: Maximum quality with Pro models (~$40, 5 iterations, Pro models, with judge)
  • ultra: Ultimate tier - all Pro models, 10 iterations (expensive, for when accuracy is paramount)
One of the following:
"fast"
"balanced"
"deep"
"max"
"ultra"
prompt: str
query_complexity: Literal["simple", "moderate", "complex"]
One of the following:
"simple"
"moderate"
"complex"
research_objective: str
research_plan: str
research_questions: List[str]
total_pages_analyzed: float

Total pages analyzed across all iterations

cited_pages: Optional[List[V1ResearchEventCompleteDataMetadataCitedPage]]

Pages cited in the report, ordered by first citation appearance

id: str
claims: List[str]
source_queries: List[str]
url: str
depth: Optional[float]
full_text: Optional[str]

Full page text (fetched markdown or search excerpts). Only populated when includeFullText: true in ResearchOptions.

  • Fast mode: Parallel API excerpts (~5000 chars)
  • Other modes: Fetched page markdown
parent_url: Optional[str]
relevance: Optional[Literal["low", "medium", "high"]]
One of the following:
"low"
"medium"
"high"
reliability: Optional[Literal["low", "medium", "high"]]
One of the following:
"low"
"medium"
"high"
summary: Optional[str]

LLM-generated summary. Undefined in fast mode (no content analysis).

title: Optional[str]
url_source: Optional[Literal["user-input", "search-result", "extracted-link"]]

URL source tracking - where a URL came from

One of the following:
"user-input"
"search-result"
"extracted-link"
gap_evaluations: Optional[List[V1ResearchEventCompleteDataMetadataGapEvaluation]]
gap_description: str

Based on unanswered/partial questions, what specific information is still needed?

question_assessments: List[V1ResearchEventCompleteDataMetadataGapEvaluationQuestionAssessment]

Assessment of each research question’s status and findings

findings: str

What we learned (if answered/partial) or what’s missing (if unanswered)

question: str

The research question being assessed

status: Literal["answered", "partial", "unanswered"]

Status: answered (clear info), partial (some info, gaps remain), unanswered (no relevant info)

One of the following:
"answered"
"partial"
"unanswered"
research_coverage: Literal["Light", "Moderate", "Solid", "Comprehensive"]

Research coverage level - assesses quality across all questions.

Hierarchy: Light < Moderate < Solid < Comprehensive

  • Light: Basic info on some questions, most need more depth → Continue
  • Moderate: Multiple questions answered, some remain partial → Continue
  • Solid: Most questions well-answered with validated sources → Sufficient to stop
  • Comprehensive: All questions thoroughly answered, exceptional depth → Definitely stop
One of the following:
"Light"
"Moderate"
"Solid"
"Comprehensive"
should_continue_research: bool

Explicit decision: should research continue with another iteration?

  • Considers: how many questions unanswered/partial, coverage for mode, remaining iterations
  • Drives query generation: true → generate queries, false → stop researching
new_research_questions: Optional[List[str]]

New research questions to add (optional, use sparingly)

  • Only if original decomposition missed something critical
  • Maximum 2-3 new questions total across all iterations
  • Most iterations should return empty array or omit this field
search_queries: Optional[List[str]]

Search queries to address identified gaps (only when shouldContinueResearch is true)

  • Target unanswered questions first, then partial questions
  • 3-10 targeted queries if shouldContinueResearch is true
  • Omit or provide empty array if shouldContinueResearch is false
judgments: Optional[List[V1ResearchEventCompleteDataMetadataJudgment]]
approved: bool
observation: str
score: float
feedback: Optional[str]
metrics: Optional[V1ResearchEventCompleteDataMetadataMetrics]

Complete research metrics

cached_fetches: float

Cached fetch count (subset of fetches)

cached_searches: Dict[str, float]

Cached search count by provider name (subset of searches)

fetches: float

Fetch count (number of pages fetched)

iterations: float

Number of research iterations performed

phases: Dict[str, V1ResearchEventCompleteDataMetadataMetricsPhases]

Phase timings with duration in milliseconds

duration: float
robots_blocked: float

Number of URLs blocked by robots.txt

searches: Dict[str, float]

Search count by provider name (e.g., “bright-data”, “parallel”)

success_rates: V1ResearchEventCompleteDataMetadataMetricsSuccessRates

Success rate metrics

analyzes: float
fetches: float
searches: float
tokens: Dict[str, V1ResearchEventCompleteDataMetadataMetricsTokens]

Token usage by model ID (e.g., “gemini-2.5-flash”)

input: float
output: float
total_duration: float

Total duration in milliseconds

outline: Optional[V1ResearchEventCompleteDataMetadataOutline]

Report outline from research writer

direct_answer: str
key_takeaways: List[str]
outline: str
relevant_source_ids: List[str]
url_sources: Optional[V1ResearchEventCompleteDataMetadataURLSources]
search_results: float
user_provided: float
report: str
timestamp: float
event: Literal["complete"]
class V1ResearchEventError:

Envelope for the “error” event from /v1/research.

data: V1ResearchEventErrorData

error - Research failed

error: V1ResearchEventErrorDataError
message: str
name: str
stack: Optional[str]
message: str
timestamp: float
activity: Optional[Literal["prefetching", "planning", "iteration", "searching", "analyzing", "following", "evaluating", "outlining", "writing", "judging"]]

Activity types for research workflow

One of the following:
"prefetching"
"planning"
"iteration"
"searching"
"analyzing"
"following"
"evaluating"
"outlining"
"writing"
"judging"
iteration: Optional[float]
event: Literal["error"]
class V1ResearchEventEvaluatingEnd:

Envelope for the “evaluating:end” event from /v1/research.

data: V1ResearchEventEvaluatingEndData
coverage: Literal["Light", "Moderate", "Solid", "Comprehensive"]
One of the following:
"Light"
"Moderate"
"Solid"
"Comprehensive"
gaps: str
iteration: float
message: str
next_queries: List[str]
question_assessments: List[V1ResearchEventEvaluatingEndDataQuestionAssessment]
findings: str

What we learned (if answered/partial) or what’s missing (if unanswered)

question: str

The research question being assessed

status: Literal["answered", "partial", "unanswered"]

Status: answered (clear info), partial (some info, gaps remain), unanswered (no relevant info)

One of the following:
"answered"
"partial"
"unanswered"
should_continue: bool
timestamp: float
event: Literal["evaluating:end"]
class V1ResearchEventEvaluatingStart:

Envelope for the “evaluating:start” event from /v1/research.

data: V1ResearchEventEvaluatingStartData
iteration: float
message: str
pages_analyzed: float

Total pages analyzed so far (including this iteration)

question_count: float

Number of research questions being assessed

timestamp: float
event: Literal["evaluating:start"]
class V1ResearchEventFollowingEnd:

Envelope for the “following:end” event from /v1/research.

data: V1ResearchEventFollowingEndData
failed: float
followed: float
iteration: float
message: str
samples: List[V1ResearchEventFollowingEndDataSample]
domain: str
title: str
url: str
url_source: Literal["user-input", "search-result", "extracted-link"]

URL source tracking - where a URL came from

One of the following:
"user-input"
"search-result"
"extracted-link"
relevance: Optional[Literal["low", "medium", "high"]]
One of the following:
"low"
"medium"
"high"
reliability: Optional[Literal["low", "medium", "high"]]
One of the following:
"low"
"medium"
"high"
summary: Optional[str]
timestamp: float
event: Literal["following:end"]
class V1ResearchEventFollowingStart:

Envelope for the “following:start” event from /v1/research.

data: V1ResearchEventFollowingStartData
iteration: float
link_count: float
message: str
timestamp: float
event: Literal["following:start"]
class V1ResearchEventIterationEnd:

Envelope for the “iteration:end” event from /v1/research.

data: V1ResearchEventIterationEndData
is_last: bool

Whether this is the final iteration

iteration: float
message: str
timestamp: float
stop_reason: Optional[Literal["max_iterations", "coverage_sufficient"]]

Why research iterations stopped (only present when isLast is true)

One of the following:
"max_iterations"
"coverage_sufficient"
event: Literal["iteration:end"]
class V1ResearchEventIterationStart:

Envelope for the “iteration:start” event from /v1/research.

data: V1ResearchEventIterationStartData
iteration: float
max_iterations: float

Maximum iterations for this research mode

message: str
queries: List[str]

Search queries to execute in this iteration

timestamp: float
event: Literal["iteration:start"]
class V1ResearchEventJudgingEnd:

Envelope for the “judging:end” event from /v1/research.

data: V1ResearchEventJudgingEndData
approved: bool
attempt: float
message: str
score: float
timestamp: float
feedback: Optional[str]
event: Literal["judging:end"]
class V1ResearchEventJudgingStart:

Envelope for the “judging:start” event from /v1/research.

data: V1ResearchEventJudgingStartData
attempt: float
max_attempts: float

Maximum attempts allowed (1 + maxRevisions)

message: str
timestamp: float
event: Literal["judging:start"]
class V1ResearchEventOutliningEnd:

Envelope for the “outlining:end” event from /v1/research.

data: V1ResearchEventOutliningEndData
message: str
sources_selected: float
timestamp: float
event: Literal["outlining:end"]
class V1ResearchEventOutliningStart:

Envelope for the “outlining:start” event from /v1/research.

data: V1ResearchEventOutliningStartData
message: str
pages_analyzed: float

Total pages analyzed across all iterations

quality_page_count: float

Pages that meet quality threshold (medium+ relevance and reliability)

timestamp: float
event: Literal["outlining:start"]
class V1ResearchEventPlanningEnd:

Envelope for the “planning:end” event from /v1/research.

data: V1ResearchEventPlanningEndData
complexity: Literal["simple", "moderate", "complex"]
One of the following:
"simple"
"moderate"
"complex"
message: str
objective: str
plan: str
queries: List[str]
questions: List[str]
timestamp: float
event: Literal["planning:end"]
class V1ResearchEventPlanningStart:

Envelope for the “planning:start” event from /v1/research.

data: V1ResearchEventPlanningStartData
has_prefetched_context: bool

Whether prefetched user-provided URLs exist for context

message: str
timestamp: float
event: Literal["planning:start"]
class V1ResearchEventPrefetchingEnd:

Envelope for the “prefetching:end” event from /v1/research.

data: V1ResearchEventPrefetchingEndData
failed: float
fetched: float
message: str
timestamp: float
event: Literal["prefetching:end"]
class V1ResearchEventPrefetchingStart:

Envelope for the “prefetching:start” event from /v1/research.

data: V1ResearchEventPrefetchingStartData
message: str
timestamp: float
url_count: float
urls: List[str]
event: Literal["prefetching:start"]
class V1ResearchEventSearchingEnd:

Envelope for the “searching:end” event from /v1/research.

data: V1ResearchEventSearchingEndData
iteration: float
message: str
timestamp: float
urls_found: float
urls_new: float
event: Literal["searching:end"]
class V1ResearchEventSearchingStart:

Envelope for the “searching:start” event from /v1/research.

data: V1ResearchEventSearchingStartData
iteration: float
message: str
queries: List[str]
timestamp: float
event: Literal["searching:start"]
class V1ResearchEventStart:

Envelope for the “start” event from /v1/research.

data: V1ResearchEventStartData

start - Research begins

message: str
timestamp: float
event: Literal["start"]
class V1ResearchEventWritingEnd:

Envelope for the “writing:end” event from /v1/research.

data: V1ResearchEventWritingEndData
attempt: float
message: str
timestamp: float
event: Literal["writing:end"]
class V1ResearchEventWritingStart:

Envelope for the “writing:start” event from /v1/research.

data: V1ResearchEventWritingStartData
attempt: float
is_revision: bool

Whether this is a revision attempt (attempt > 1)

max_attempts: float

Maximum attempts allowed (1 + maxRevisions)

message: str
timestamp: float
previous_score: Optional[float]

Previous judgment score if this is a revision

event: Literal["writing:start"]

Research

import os
from tabstack import Tabstack

client = Tabstack(
    api_key=os.environ.get("TABSTACK_API_KEY"),  # This is the default and can be omitted
)
for event in client.agent.research(
    query="What are the latest developments in quantum computing?",
):
    # Each iteration yields one typed research event from the SSE stream.
    print(event)