Execute AI-powered research queries that search the web, analyze sources, and synthesize comprehensive answers. This endpoint always streams responses using Server-Sent Events (SSE).
Streaming Response:
- All responses are streamed using Server-Sent Events (text/event-stream)
- Real-time progress updates as research progresses through phases
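The text/event-stream format can be consumed without an SDK. The sketch below is a minimal parser for SSE lines, assuming only the standard wire format (`event:` and `data:` fields, events separated by blank lines); the sample event name and payload are illustrative, not captured output.

```python
import json

def parse_sse(lines):
    """Parse an iterable of text/event-stream lines into (event, data) pairs."""
    event, data = None, []
    for line in lines:
        line = line.rstrip("\n")
        if line == "":  # a blank line terminates one event
            if data:
                yield event or "message", "\n".join(data)
            event, data = None, []
        elif line.startswith("event:"):
            event = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data.append(line[len("data:"):].strip())

# Illustrative stream: one "complete" event with a JSON payload.
stream = [
    "event: complete",
    'data: {"data": {"answer": "..."}}',
    "",
]
for name, payload in parse_sse(stream):
    print(name, json.loads(payload))
```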
Research Modes:
- fast - Quick answers with minimal web searches (default)
- balanced - Standard research with multiple iterations
Use Cases:
- Answering complex questions with cited sources
- Synthesizing information from multiple web sources
- Research reports on specific topics
- Fact-checking and verification tasks
Parameters
Returns
A Server-Sent Event from /v1/research. Typed discriminated union keyed on event.
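Because the union is keyed on event, client code can branch on that single field. A minimal sketch of such a dispatcher follows; the `handle` function and the tuples it returns are hypothetical illustrations, not part of the SDK, and only the event names are taken from this reference.

```python
# Hypothetical dispatcher over the `event` discriminator of a parsed SSE payload.
def handle(event_name, payload):
    if event_name == "complete":
        # Research finished; metadata lives under data.metadata per this reference.
        return ("answer", payload.get("data", {}).get("metadata"))
    if event_name == "error":
        return ("error", payload)
    if event_name.endswith(":end"):
        # e.g. "analyzing:end", "evaluating:end", "iteration:end"
        return ("phase-finished", event_name.split(":")[0])
    return ("progress", event_name)

print(handle("analyzing:end", {}))
```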
class V1ResearchEventAnalyzingEnd: …
Envelope for the “analyzing:end” event from /v1/research.
class V1ResearchEventComplete: …
Envelope for the “complete” event from /v1/research.
data: V1ResearchEventCompleteData
complete - Research finished successfully
metadata: V1ResearchEventCompleteDataMetadata
Research metadata
Note: citedPages, gapEvaluations, outline, and judgments are optional to support fast mode, which skips these phases for maximum speed.
mode: Literal["fast", "balanced", "deep", "max", "ultra"]
Research mode determines depth, thinking budget, and quality controls
Modes (in order of cost/thoroughness):
- fast: Quick answers with minimal validation (~$2, 1 iteration, no judge)
- balanced: Standard research with moderate depth (~$8, 3 iterations, Flash models, no judge)
- deep: Thorough research with judge review (~$15, 5 iterations, Flash models, with judge)
- max: Maximum quality with Pro models (~$40, 5 iterations, Pro models, with judge)
- ultra: Ultimate tier - all Pro models, 10 iterations (expensive, for when accuracy is paramount)
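The documented modes can be compared programmatically. The figures below are copied from the list above; the dict structure is an assumption made for the example, and ultra is omitted because the reference gives no price for it.

```python
# Illustrative mode table; cost/iteration figures copied from the documented list.
MODES = {
    "fast":     {"cost_usd": 2,  "iterations": 1, "judge": False},
    "balanced": {"cost_usd": 8,  "iterations": 3, "judge": False},
    "deep":     {"cost_usd": 15, "iterations": 5, "judge": True},
    "max":      {"cost_usd": 40, "iterations": 5, "judge": True},
}

def cheapest_with_judge():
    """Pick the lowest-cost mode that includes judge review."""
    candidates = (m for m, v in MODES.items() if v["judge"])
    return min(candidates, key=lambda m: MODES[m]["cost_usd"])

print(cheapest_with_judge())  # deep
```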
cited_pages: Optional[List[V1ResearchEventCompleteDataMetadataCitedPage]]
Pages cited in the report, ordered by first citation appearance
gap_evaluations: Optional[List[V1ResearchEventCompleteDataMetadataGapEvaluation]]
Based on unanswered/partial questions, what specific information is still needed?
question_assessments: List[V1ResearchEventCompleteDataMetadataGapEvaluationQuestionAssessment]
Assessment of each research question’s status and findings
research_coverage: Literal["Light", "Moderate", "Solid", "Comprehensive"]
Research coverage level - assesses quality across all questions.
Hierarchy: Light < Moderate < Solid < Comprehensive
- Light: Basic info on some questions, most need more depth → Continue
- Moderate: Multiple questions answered, some remain partial → Continue
- Solid: Most questions well-answered with validated sources → Sufficient to stop
- Comprehensive: All questions thoroughly answered, exceptional depth → Definitely stop
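Since the hierarchy is a strict ordering, a client can reduce it to a stop/continue check. A minimal sketch, assuming the documented rule that Solid or better is sufficient to stop (the function name is hypothetical):

```python
# Coverage levels in documented ascending order.
COVERAGE_ORDER = ["Light", "Moderate", "Solid", "Comprehensive"]

def should_stop(coverage: str) -> bool:
    """Return True when coverage is Solid or better, per the documented hierarchy."""
    return COVERAGE_ORDER.index(coverage) >= COVERAGE_ORDER.index("Solid")

print(should_stop("Moderate"), should_stop("Solid"))  # False True
```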
Explicit decision: should research continue with another iteration?
- Considers: how many questions unanswered/partial, coverage for mode, remaining iterations
- Drives query generation: true → generate queries, false → stop researching
metrics: Optional[V1ResearchEventCompleteDataMetadataMetrics]
Complete research metrics
phases: Dict[str, V1ResearchEventCompleteDataMetadataMetricsPhases]
Phase timings with duration in milliseconds
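Because phases is a dict of per-phase timing objects, a total runtime can be derived by summing durations. The shape below is an assumption for illustration: the `duration` key and the phase names are hypothetical, with only "duration in milliseconds" taken from the field description.

```python
# Hypothetical phases payload shaped like the `phases` field (durations in ms).
phases = {
    "analyzing":  {"duration": 1200},
    "evaluating": {"duration": 800},
}

# Sum per-phase durations to get total research time in milliseconds.
total_ms = sum(p["duration"] for p in phases.values())
print(total_ms)  # 2000
```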
class V1ResearchEventError: …
Envelope for the “error” event from /v1/research.
class V1ResearchEventEvaluatingEnd: …
Envelope for the “evaluating:end” event from /v1/research.
class V1ResearchEventFollowingEnd: …
Envelope for the “following:end” event from /v1/research.
class V1ResearchEventIterationEnd: …
Envelope for the “iteration:end” event from /v1/research.
class V1ResearchEventPrefetchingStart: …
Envelope for the “prefetching:start” event from /v1/research.
Research
import os

from tabstack import Tabstack

client = Tabstack(
    api_key=os.environ.get("TABSTACK_API_KEY"),  # This is the default and can be omitted
)
for agent in client.agent.research(
    query="What are the latest developments in quantum computing?",
):
    print(agent)