Run AI Agents Reliably
Agent loops that survive crashes and resume exactly where they left off. Perfect for multi-step research, code generation, and autonomous workflows.
Resume from Any Point
Agent crashes mid-research? No problem. Flovyn resumes exactly where it stopped without losing context or duplicating work. Every LLM call, tool invocation, and intermediate result is durably persisted.
Time-Travel Debugging
Replay agent executions with different inputs or models. See exactly what decisions the agent made at each step. Understand token usage, latency, and reasoning paths.
Real-time Token Streaming
Stream LLM tokens to your UI as they're generated. Progress updates and intermediate results arrive via Server-Sent Events. Build responsive UIs that show agent thinking in real time.
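Server-Sent Events use a simple line-based wire format, so consuming a token stream takes only a few lines. The sketch below is an illustration of the standard SSE format, not Flovyn's client SDK; the event payloads shown are hypothetical:

```python
def parse_sse_tokens(stream_lines):
    """Extract payloads from `data:` fields of a Server-Sent Events stream.

    Assumes each event carries one LLM token in its `data:` field,
    per the standard SSE wire format (blank lines separate events).
    """
    tokens = []
    for line in stream_lines:
        if line.startswith("data:"):
            tokens.append(line[len("data:"):].strip())
    return tokens

# Three token events as they might arrive over the wire (hypothetical payloads)
raw = ["data: The", "", "data: agent", "", "data: thinks", ""]
print(" ".join(parse_sse_tokens(raw)))  # → The agent thinks
```

In a real UI you would append each token to the display as it arrives rather than collecting them, which is what makes the agent's "thinking" visible mid-run.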
@workflow(name="research-agent")
class ResearchAgent:
    async def run(self, ctx: WorkflowContext, query: Query) -> Report:
        results = []
        while len(results) < 10:
            # Each search is a durable task (string-based reference)
            search = await ctx.execute_task("web-search", {"query": query.text})
            results.append(search)
            # LLM decision - streams tokens in real-time
            decision = await ctx.execute_task("llm-decide", {"results": results})
            if decision["complete"]:
                break
        return await ctx.execute_task("compile-report", {"results": results})

Why Durable Execution for AI Agents?
AI agents make multiple LLM calls, execute tools, and maintain state across long-running sessions. Without durable execution, a crash means starting over from scratch—losing all context, repeating expensive API calls, and frustrating users.
Flovyn records every step of agent execution to PostgreSQL. When a worker restarts, it replays the execution log and resumes from exactly where it left off. No duplicate LLM calls, no lost intermediate results.
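The log-and-replay idea above can be sketched in miniature. This is an illustration of the general event-sourcing technique, not Flovyn's actual internals; the `ReplayContext` class and its `log` field are hypothetical names:

```python
class ReplayContext:
    """Toy replay context: completed steps are looked up in a persisted
    log by position, so a restarted run never re-executes finished work."""

    def __init__(self, log):
        self.log = log   # results persisted from a prior (crashed) run
        self.step = 0

    def execute_task(self, name, fn):
        idx = self.step
        self.step += 1
        if idx < len(self.log):
            return self.log[idx]   # replay: reuse the recorded result
        result = fn()              # live: execute and record
        self.log.append(result)
        return result

# The first run crashed after two steps; their results survive in the log.
ctx = ReplayContext(log=[3, 7])
a = ctx.execute_task("step-a", lambda: 1 / 0)   # replayed, never re-run
b = ctx.execute_task("step-b", lambda: 1 / 0)   # replayed, never re-run
c = ctx.execute_task("step-c", lambda: a + b)   # runs live this time
print(c)  # → 10
```

Note the deliberately crashing `lambda: 1 / 0` bodies: because the first two results come from the log, those functions are never called, which is exactly the "no duplicate LLM calls" guarantee applied to expensive or non-idempotent work.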
Combined with real-time streaming, you can build agent UIs that show thinking progress, intermediate results, and gracefully handle interruptions.