Overview
The platform is built to support complex, production-grade workflows across teams, systems, and AI-driven processes. This release focuses on strengthening that foundation, making workflows more consistent, outputs more reliable, and the overall system more stable under real usage. We’ve optimized how workflows execute, how reports are surfaced, and how integrations behave, so everything runs more predictably from build to execution.
⚙️ Workflow Execution: Predictable from Test → Production
Before
- Test runs didn’t always match production behavior
- Edits (especially multi-step changes) could apply inconsistently
- Draft and published workflows could drift out of sync
Now
- Test and live runs follow the same execution path
- Workflow state and configuration stay fully aligned
- Changes apply cleanly across both draft and published versions (see the sketch below)
Why it matters
You can iterate without second-guessing.
What you test is what actually runs.
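For those curious what "staying aligned" can mean in practice: a common pattern behind draft/published separation is snapshotting the workflow into an immutable version at publish time, so live runs always execute a pinned definition. The sketch below is purely illustrative; the names (Workflow, WorkflowVersion, publish) are hypothetical, and this is not the platform's actual data model.

```python
# Illustrative only: snapshot-on-publish keeps draft edits from
# leaking into what production runs execute. All names are hypothetical.
from dataclasses import dataclass, field
from typing import Optional

@dataclass(frozen=True)
class WorkflowVersion:
    version: int
    steps: tuple[str, ...]  # immutable snapshot of the steps

@dataclass
class Workflow:
    draft_steps: list[str] = field(default_factory=list)
    published: Optional[WorkflowVersion] = None

    def publish(self) -> WorkflowVersion:
        # Freeze the current draft into a new immutable version.
        version = (self.published.version + 1) if self.published else 1
        self.published = WorkflowVersion(version, tuple(self.draft_steps))
        return self.published

wf = Workflow(draft_steps=["fetch", "summarize", "post_to_slack"])
live = wf.publish()
wf.draft_steps.append("experimental_step")   # edit the draft after publishing
assert live.steps == ("fetch", "summarize", "post_to_slack")  # live version unaffected
```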
📊 Reports: Accurate, Immediate, and Aligned
Before
- Reports sometimes lagged or required refresh
- Duplicate reports could appear during updates
- Reports didn’t always reflect the latest workflow state
Now
- Reports generate immediately after execution
- Each run produces exactly one report (illustrated in the sketch below)
- Reports stay tied to the exact version of the workflow that ran
Why it matters
No more guessing what happened: reports now reflect reality, instantly.
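To make "exactly one report per run" concrete: this kind of guarantee is typically implemented by keying report creation on the run's identifier, so a retried or duplicated write resolves to the report that already exists. A minimal sketch with hypothetical names (record_report, reports_by_run) and an in-memory store; it is not the platform's actual API:

```python
# Illustrative only: idempotent report creation keyed on the run ID.
reports_by_run: dict[str, dict] = {}

def record_report(run_id: str, report: dict) -> dict:
    """Create the report for a run, or return the one that already exists."""
    return reports_by_run.setdefault(run_id, report)

first = record_report("run-123", {"status": "success", "steps": 4})
retry = record_report("run-123", {"status": "success", "steps": 4})  # e.g. a retry
assert first is retry  # exactly one report exists for run-123
```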
🔌 Integrations: Reliable Across Systems
Before
- Slack messages could fail or format incorrectly
- Gmail workflows were inconsistent across runs
- Timezone/date handling caused incorrect outputs
Now
- Slack delivery and formatting are consistent
- Gmail-based workflows execute reliably end-to-end
- Time and date logic is standardized across integrations (see the sketch below)
Why it matters
Workflows now behave reliably outside the platform, not just inside it.
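For context on the timezone fix: the usual root cause of "wrong day" or off-by-hours outputs is mixing naive local timestamps across systems. The standard remedy, sketched below with hypothetical helper names (a generic illustration, not the platform's code), is to record timestamps as timezone-aware UTC and convert only at the edge, when formatting for a specific recipient:

```python
# Illustrative only: normalize to aware UTC internally, convert for display.
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # Python 3.9+

def run_timestamp_utc() -> datetime:
    # Capture execution time as timezone-aware UTC.
    return datetime.now(timezone.utc)

def format_for_user(ts_utc: datetime, user_tz: str) -> str:
    # Convert once, at the edge, into the recipient's zone.
    local = ts_utc.astimezone(ZoneInfo(user_tz))
    return local.strftime("%Y-%m-%d %H:%M %Z")

ts = run_timestamp_utc()
print(format_for_user(ts, "America/New_York"))
print(format_for_user(ts, "Europe/Berlin"))  # same instant, different rendering
```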
🤖 AI Workflows: From Variable → Repeatable
Before
- Deep Research outputs varied significantly between runs
- References and results didn’t always load consistently
- Prompts didn’t reliably map to outputs
Now
- Repeated runs produce more consistent outputs
- References and results render reliably
- Prompt-to-output alignment is tighter and more predictable
Why it matters
AI workflows are now stable enough for real use cases — not just experimentation.
⚡ Performance: Faster Under Real Load
We’ve bundled a set of performance improvements that collectively improve day-to-day usage:
- Faster workflow execution times
- Improved responsiveness under concurrent usage
- Reduced interruptions in integration-heavy workflows
Why it matters
The system holds up under actual usage, not just isolated runs.
🧩 UI & Interaction: Less Friction While Building
We’ve refined core interaction patterns across the platform:
- Clearer naming across workflows and components
- More consistent behavior across create, edit, and draft flows
- Improved UI responsiveness and state accuracy
Why it matters
Building workflows feels more predictable and less error-prone.
🛠️ Stability Fixes
We’ve resolved a set of edge cases affecting execution, reporting, and UI behavior:
- Fixed inconsistencies in workflow execution and report generation
- Improved tool visibility and behavior across workflows
- Resolved UI state issues across editing and chat flows
Why it matters
Fewer edge cases → more trust in daily usage.