Canso - ML Platform

Getting Started


Last updated 6 months ago


This guide provides a general overview of using the Canso AI Agentic System, introducing key concepts and explaining how they interconnect to simplify the development and deployment of your AI agent.

Prerequisites

The Canso AI Agentic System is built on the foundation of the Canso Architecture. Before proceeding, ensure you have:

  1. A Canso compatible Kubernetes cluster set up.

  2. Canso Helm charts installed on your cluster.

To get started, install Gru by following the instructions here.

Setting up the components

Deploying an AI agent involves more than just deploying the agent itself; it also requires deploying the various components the agent depends on. These may include:

  1. A Checkpoint DB to save execution checkpoints,

  2. A Broker and a Task Server to support asynchronous execution of long-running tasks.

To set up the components:

  1. Define a YAML file containing the configurations for each component to be deployed.

    Example - config.yaml:

    broker:
      type: redis
      name: my-redis
    checkpoint_db:
      type: postgres
      name: my-postgres
      size: 4Gi
    task_server:
      type: celery
      name: my-task-server
      replicas: 4
      concurrency_per_replica: 1
      broker_resource_name: my-redis
  2. Run the gru command to set up the components:

    gru component setup --cluster-name <name_of_your_cluster> --config-file config.yaml

That's it! The components are now deployed in your cluster and ready to be integrated with your AI Agent.

Note: You can also choose to set up the components individually by creating a separate YAML file for each component and executing the setup command with the respective files.
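For example, the combined config.yaml above can be split into one file per component with the same fields (the file name below is illustrative):

```yaml
# broker.yaml -- configuration for just the Broker component
broker:
  type: redis
  name: my-redis
```

Repeat the same pattern for checkpoint_db and task_server, then run the gru component setup command once per file, passing each file to --config-file.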

Creating the project bootstrap

Set up the scaffold folder for your AI agent project by executing the command:

gru agent create_bootstrap

This will prompt you for a set of configurations for deploying your AI Agent. For example:

agent_name (Agent Name): my-agent
agent_framework (Langgraph): Langgraph        
task_server_name: my-task-server
checkpoint_db_name: my-postgres
replicas (1): 1

The task_server_name and checkpoint_db_name specified here correspond to the names assigned when creating those components in the previous step. This enables the Canso AI Agentic System to connect your agent to the appropriate Checkpoint DB and Task Server.

After providing the required inputs, a bootstrap project folder is generated with the following structure:

.
β”œβ”€β”€ .dockerignore           # Files to exclude from Docker build
β”œβ”€β”€ .env                    # Environment variables for the application
β”œβ”€β”€ Dockerfile              # Docker build file
β”œβ”€β”€ README.md               # Documentation placeholder
β”œβ”€β”€ config.yaml             # Agent configuration settings
β”œβ”€β”€ requirements.txt        # Python dependencies for your agent
└── src/
    └── main.py             # Entry point for the application

Development and Image Build

Inside the created folder, define your AI agent and wrap it using the wrappers provided by Canso. All Python files should be placed inside the src folder, with src/main.py serving as the entry point for the application.

In src/main.py, ensure your agent is wrapped with the Canso Agent Wrappers. For instance, if you’re creating the agent using Langgraph, your src/main.py should include something like the following:

from gru.agents import CansoLanggraphAgent

# ... your agent code (e.g. build your Langgraph agent) ...

canso_agent = CansoLanggraphAgent(stateGraph=<your langgraph agent>)
canso_agent.run()

Add the environment variables needed by your agent to the .env file, and update the configurations in config.yaml if needed.

Create a Docker image and push it to a container registry:

docker build -t my-agent-image:tag .
docker push my-agent-image:tag

Register and Deploy Agent

Run the commands below to register and deploy your agent:

# Register agent
gru agent register . --cluster-name <name_of_your_cluster> --image my-agent-image:tag

# Deploy agent
gru agent deploy my-agent

Your agent is now deployed in your cluster and ready to receive prompts!

Sending prompts to the agent

To send prompts to your agent, create a JSON file containing the prompt and use the following command:

gru agent prompt my-agent <path_to_your_json_file>

The prompt is sent to your AI Agent for processing.
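The shape of the JSON file depends on the inputs your agent expects. As an illustrative example only (the prompt field here is hypothetical, not a required schema):

```json
{
  "prompt": "Summarize yesterday's flagged transactions."
}
```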

Next Steps

Read the Quickstart guide to develop and deploy a simple agent in your cluster.
