SyGra Studio adds a visual builder for synthetic data workflows in SyGra 2.0.0
ServiceNow’s SyGra 2.0.0 ships Studio, a canvas-based UI to design, run, and debug synthetic data flows with live execution metrics and exportable configs.

Key Takeaways
- SyGra Studio (SyGra 2.0.0) provides a canvas UI to build synthetic data workflows while still exporting SyGra-compatible configs.
- Teams can preview dataset rows first, then use column-derived state variables directly inside prompts to reduce prompt wiring errors.
- Executions stream node-level status plus token usage, latency, and cost, with run history saved locally for audits and comparisons.
- Existing repo workflows (like a critique loop over the Glaive Code Assistant dataset) can be opened and run inside Studio.

Synthetic dataset generation is moving from config files to visual tooling, and ServiceNow is pushing that shift with SyGra Studio, an interactive UI in SyGra 2.0.0 aimed at making LLM-driven pipelines easier to build, inspect, and rerun with governance in mind.
Visual workflow design for synthetic data generation and prompt engineering
SyGra Studio turns a SyGra task into a canvas: teams can assemble nodes (for example, LLM steps for drafting and summarizing) without hand-editing YAML. For marketers and e-commerce operators, the immediate win is iteration speed—prompt engineering becomes less about switching between terminals and more about tweaking prompts with inline variable hints pulled from the dataset schema.
Studio supports guided model setup across common providers and runtimes (including OpenAI, Azure OpenAI, Ollama, Vertex, Bedrock, and vLLM), then lets users connect data sources like Hugging Face datasets or local files and preview rows before committing a run. Column names from the preview become state variables you can reference directly inside prompts, which reduces wiring errors when multiple steps depend on shared context.
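The column-to-variable mechanic can be illustrated framework-agnostically. The snippet below is a minimal sketch, not SyGra's actual API: the row, template, and variable names are hypothetical, and it only shows the idea that previewed dataset columns resolve by name inside a prompt template.

```python
# Hypothetical illustration of column-derived state variables: each column
# in a previewed dataset row becomes a variable a prompt template can
# reference by name. The row and template here are invented examples.
row = {
    "question": "How do I reverse a list in Python?",
    "answer": "Use list.reverse() or slicing.",
}
template = "Improve this answer.\nQuestion: {question}\nDraft answer: {answer}"

# Column names resolve directly as template variables, so renaming a column
# in the source dataset surfaces immediately as a template error rather
# than a silent mismatch deep in the pipeline.
prompt = template.format(**row)
```

The payoff is exactly the wiring-error reduction the article describes: a misspelled variable fails fast at template-fill time instead of producing a subtly wrong prompt.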
Execution observability, debugging, and exportable configs for automation teams
For B2B automation teams, the main draw is observability. Runs stream node-by-node status, latency, token usage, and estimated cost, with execution history written to local artifacts (for example, under .executions/). There are also developer-focused controls, including inline logs, breakpoints, and a Monaco-based editor, so you can debug failures without leaving the UI.
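Because run history lands in local files, it can be mined after the fact for audits and cost comparisons. A minimal sketch, assuming a hypothetical artifact layout of one JSON record per node step (the real .executions/ format may differ):

```python
# Hypothetical post-run analysis of locally written execution artifacts.
# The record shape {"node": ..., "tokens": ..., "latency_ms": ..., "cost_usd": ...}
# is an assumption, not SyGra's documented format.
import json
from collections import defaultdict
from pathlib import Path

def load_records(run_dir):
    """Read every JSON record from a run directory (assumed layout)."""
    return [json.loads(p.read_text()) for p in sorted(Path(run_dir).glob("*.json"))]

def summarize_runs(records):
    """Aggregate token usage, latency, and estimated cost per node."""
    totals = defaultdict(lambda: {"tokens": 0, "latency_ms": 0.0, "cost_usd": 0.0})
    for rec in records:
        node = totals[rec["node"]]
        node["tokens"] += rec["tokens"]
        node["latency_ms"] += rec["latency_ms"]
        node["cost_usd"] += rec["cost_usd"]
    return dict(totals)
```

A per-node rollup like this is what makes run-over-run comparisons cheap: diff two summaries and cost or latency regressions surface immediately.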
Importantly, Studio doesn’t replace the underlying framework: it generates the same SyGra-compatible graph config and task executor scripts you would commit in a repo. That means teams can prototype in UI, then operationalize the output in CI or other automation contexts.
For a concrete reference, ServiceNow points to an existing workflow that consumes the Glaive Code Assistant dataset from Hugging Face and iterates answers until a critique loop returns “NO MORE FEEDBACK,” using a conditional edge to route revisions or exit cleanly. Dataset reference: glaiveai/glaive-code-assistant-v2. A walkthrough and release context are available via SyGra 2.0.0.
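Stripped of SyGra specifics, the critique loop described above can be sketched in plain Python. Everything here is a placeholder: call_llm stands in for any provider client, the prompt wording is invented, and only the control flow (revise until the critic emits the "NO MORE FEEDBACK" sentinel, with a round cap as a safety valve) mirrors the pattern the article describes.

```python
# Framework-agnostic sketch of a critique loop: draft, critique, revise,
# and exit when the critic returns the sentinel "NO MORE FEEDBACK".
# call_llm is a stand-in for any model client, not a SyGra API.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your model client here")

def critique_loop(question: str, answer: str, llm=call_llm, max_rounds: int = 5) -> str:
    for _ in range(max_rounds):
        feedback = llm(f"Critique this answer to '{question}':\n{answer}")
        if "NO MORE FEEDBACK" in feedback:
            # Conditional-edge equivalent: route to a clean exit.
            break
        answer = llm(
            f"Revise the answer using this feedback:\n{feedback}\n\nAnswer:\n{answer}"
        )
    return answer
```

The max_rounds cap matters in practice: without it, a critic that never emits the sentinel would loop (and bill) indefinitely.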
The bigger signal: as AI teams standardize synthetic data as a repeatable asset, tooling that combines previews, versionable configs, and run-level metrics is becoming table stakes—not a nice-to-have.