Task Server


The Task Server is a distributed task processing component in Canso's AI Agentic System that gives AI agents the ability to execute long-running or computationally intensive tasks asynchronously.

The diagram below illustrates how the task server integrates with your AI Agent.

The tools provided by Canso integrate seamlessly with the Task Server. All you need to do is set up the Broker and the Task Server, which involves executing a simple CLI command, and then add the Canso tools to your AI Agent.

Core Design Philosophy

The Task Server implements a fundamental architectural principle: the separation between agent decision-making and task execution. This separation provides several key advantages:

  1. Clean Separation of Concerns

    • Agents focus purely on decision-making and workflow orchestration

    • Task execution is handled independently by specialized workers

    • Clear boundaries between thinking (agents) and doing (tasks)

  2. Scalability and Resource Optimization

    • Agent processes remain lightweight and responsive

    • Compute-intensive tasks are offloaded to appropriate workers

    • Independent scaling of agent instances and task workers

  3. Enhanced Reliability

    • Task failures don't impact agent stability

    • Retry mechanisms are handled separately from agent logic

    • Better error isolation and recovery

This architecture enables AI agents to orchestrate complex workflows while maintaining responsiveness and reliability, making it ideal for production deployments.
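For intuition, here is a minimal sketch of that separation using Celery with a Redis broker, matching the task server type and broker used in the configuration below. The task name, query, and return value are illustrative only and are not part of Canso's API.

# Minimal sketch of the agent/worker separation, assuming Celery with a
# Redis broker. Names are illustrative, not Canso's API.
from celery import Celery

app = Celery(
    "task_server",
    broker="redis://localhost:6379/0",
    backend="redis://localhost:6379/0",
)

@app.task
def run_sql_report(query: str) -> str:
    # Long-running or compute-intensive work executes here, on a worker
    # replica, entirely outside the agent process.
    return f"finished: {query}"

# Inside the agent: enqueue the task and stay responsive. The agent only
# tracks task state; it never blocks on execution.
result = run_sql_report.delay("SELECT count(*) FROM transactions")
print(result.id, result.status)  # e.g. PENDING, then SUCCESS once a worker finishes

Worker processes run separately (for plain Celery, something like celery -A <module> worker --concurrency=1), which is roughly what the replicas and concurrency_per_replica settings below control at the cluster level.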

Setting up the Task Server

To set up the task server, define a YAML file:

task_server:
  type: celery
  name: task_server
  replicas: 4
  concurrency_per_replica: 1
  broker_resource_name: redis

The table below explains the configuration attributes:

| Attribute | Description | Example |
| --- | --- | --- |
| type | Type of task server being used | celery |
| name | Unique name of the task server | agent-task-server |
| replicas | Number of worker replicas | 4 |
| concurrency_per_replica | Tasks processed concurrently per worker | 1 |
| broker_resource_name | Name of the associated broker instance | redis |

Run the gru command to set up the task server:

gru component setup --cluster-name <cluster-name> --config-file config.yaml
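If you want to sanity-check config.yaml before running the command, a small script like the one below can catch missing attributes early. This is a convenience sketch, not part of Canso's tooling.

# Optional sanity check for config.yaml before running the gru command.
# Illustrative helper only; not part of Canso's tooling.
import yaml  # pip install pyyaml

REQUIRED = {
    "type",
    "name",
    "replicas",
    "concurrency_per_replica",
    "broker_resource_name",
}

with open("config.yaml") as f:
    cfg = yaml.safe_load(f)["task_server"]

missing = REQUIRED - cfg.keys()
if missing:
    raise ValueError(f"config.yaml is missing attributes: {sorted(missing)}")

print(
    f"Task server '{cfg['name']}': {cfg['replicas']} replica(s), "
    f"{cfg['concurrency_per_replica']} task(s) per replica, "
    f"broker: {cfg['broker_resource_name']}"
)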

Tool Tips

Note: Setting up a Broker is a prerequisite for setting up a task server. See the Broker page for more details.

See Broker ➡️

Learn about Checkpoint DB ➡️

Explore Memory ➡️
