Live

The Live tab streams the three outer layers – Parsed Output, Execution, and Observation – in real time as the pipeline runs. Each agent role gets a color-coded header so you can follow the flow at a glance.

Outer Layers

For each pipeline stage, the Live tab surfaces three layers as they complete:

  • Parsed Output – structured fields extracted from the raw response (action, tool, verdict, scores, etc.)
  • Execution – tool calls, search results, and intermediate state
  • Observation – what the agent observed from tool execution (chunk counts, relevance scores, quality labels)
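The three layers above can be sketched as a typed event shape. The field names here are illustrative assumptions, not the actual schema:

```typescript
// Illustrative shapes for the three outer layers. Field names are
// assumptions drawn from the examples in the docs, not the real schema.
interface ParsedOutput {
  action?: string;
  tool?: string;
  verdict?: string;
  scores?: number[];
}

interface ExecutionLayer {
  toolCalls: string[];      // tool calls made during the stage
  searchResults: unknown[]; // intermediate results/state
}

interface ObservationLayer {
  chunkCount: number;
  relevanceScores: number[];
  qualityLabel: string;
}

interface StageEvent {
  parsed: ParsedOutput;
  execution: ExecutionLayer;
  observation: ObservationLayer;
}

// A hypothetical event for one ReAct step:
const example: StageEvent = {
  parsed: { action: "search", tool: "retriever" },
  execution: { toolCalls: ["retriever"], searchResults: [] },
  observation: { chunkCount: 8, relevanceScores: [0.91, 0.47], qualityLabel: "good" },
};
```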

CLI-Style Output

Events render as a vertical tree inspired by Claude Code's output format. Stage headers use a bullet marker (⏺), and sub-items are indented with arc connectors (⎿). The pure-black background (#000) and monospace font stack (Berkeley Mono, JetBrains Mono, Fira Code, Cascadia Code) keep the focus on the data.
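A minimal sketch of that tree rendering: one header line with the bullet marker, then each sub-item indented under an arc connector. The function name and glyphs are assumptions in the Claude Code style the docs reference:

```typescript
// Render one stage of the vertical tree: bullet-marked header,
// arc-connected sub-items. Glyphs follow the Claude Code style
// referenced in the docs; the app's exact characters may differ.
function renderStage(header: string, items: string[]): string[] {
  const lines = [`⏺ ${header}`];
  for (const item of items) {
    lines.push(`  ⎿ ${item}`);
  }
  return lines;
}

const tree = renderStage("ReAct", ["tool: search", "observation: 8 chunks"]);
```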

Role Colors

Each agent role has a distinct color applied to its stage header and related events:

  • ReAct – cyan (#66cccc)
  • Grader – yellow (#cccc66)
  • Judge – green (#66cc66)
  • Fallback – pink (#cc66cc)
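The mapping above can be expressed as a simple lookup table. The default color for unknown roles is an assumption:

```typescript
// Role-to-color mapping from the list above.
const ROLE_COLORS: Record<string, string> = {
  react: "#66cccc",    // cyan
  grader: "#cccc66",   // yellow
  judge: "#66cc66",    // green
  fallback: "#cc66cc", // pink
};

function colorFor(role: string): string {
  // White fallback for unrecognized roles is an assumption.
  return ROLE_COLORS[role.toLowerCase()] ?? "#ffffff";
}
```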

Streaming Events

The backend sends Server-Sent Events (SSE) via the /query/stream endpoint as each pipeline stage completes. Seven event types flow through the stream:

  • status – pipeline stage changes (classified, grading, judging, fallback)
  • step – ReAct agent reasoning steps (tool calls + observations)
  • grader – Grader chunk relevance scores and filtering results
  • judge – Judge verdict (ACCEPT/RETRY), confidence, answer or feedback
  • fallback – Fallback LLM completion when the judge defers
  • done – final QueryResponse with full pipeline metrics
  • error – error details if the pipeline fails
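On the client, each of these event types can be routed to its own handler. The dispatch helper below is an illustrative sketch, not the app's actual code; the event names and the /query/stream endpoint come from the docs:

```typescript
// The seven event types sent over the /query/stream SSE endpoint.
const EVENT_TYPES = ["status", "step", "grader", "judge", "fallback", "done", "error"] as const;
type LiveEventType = (typeof EVENT_TYPES)[number];

// Route a raw SSE event to a per-type handler. In the browser this
// would be wired to an EventSource("/query/stream") via
// addEventListener for each type; here it is a pure function so the
// routing logic stands alone. Returns false for unknown event types.
function dispatch(
  type: string,
  data: string,
  handlers: Partial<Record<LiveEventType, (payload: unknown) => void>>,
): boolean {
  if (!(EVENT_TYPES as readonly string[]).includes(type)) return false;
  handlers[type as LiveEventType]?.(JSON.parse(data));
  return true;
}
```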

Spinner

While a stage is running, a Braille spinner rotates inline with the stage header at 80ms per frame. Each stage (ReAct, Grader, Judge, Fallback) has its own spinner instance. The spinner is replaced with final metrics once the stage completes.
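The frame selection reduces to modular arithmetic over elapsed time. The frame set below is the common ten-character Unicode Braille cycle; the app's exact frames are an assumption:

```typescript
// Braille spinner: advance one frame every 80 ms. This frame set is
// the common Unicode Braille cycle (an assumption about the app).
const FRAMES = ["⠋", "⠙", "⠹", "⠸", "⠼", "⠴", "⠦", "⠧", "⠇", "⠏"];

function frameAt(elapsedMs: number): string {
  return FRAMES[Math.floor(elapsedMs / 80) % FRAMES.length];
}
```

With 10 frames at 80 ms each, the spinner completes a full rotation every 800 ms.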

Settings Bar

Above the output, the settings bar provides per-agent model dropdowns and think toggles. Models are fetched dynamically from the Ollama Cloud API and each dropdown maps to a pipeline role. Model and think changes take effect on the next query.

Token Metrics

Every stage header displays token counts (prompt + completion) once the stage finishes. A total token count and tokens-per-second rate appear in the Done banner at the end of the pipeline.
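The tokens-per-second figure is presumably total tokens over wall-clock pipeline time; the exact formula is an assumption:

```typescript
// Tokens-per-second as it might be computed for the Done banner:
// (prompt + completion tokens) / elapsed seconds. Formula is an
// assumption, not confirmed by the docs.
function tokensPerSecond(promptTokens: number, completionTokens: number, elapsedMs: number): number {
  return (promptTokens + completionTokens) / (elapsedMs / 1000);
}
```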