Getting Started
This guide provides a general overview of the Canso AI Agentic System, introducing key concepts and explaining how they interconnect to simplify your AI agent's development and deployment process.
Prerequisites
The Canso AI Agentic System is built on the foundation of the Canso Architecture. Before proceeding, ensure you have:
A Canso-compatible Kubernetes cluster set up.
Canso Helm charts installed on your cluster.
To get started, install Gru by following the instructions here.
Setting up the components
Deploying an AI agent involves more than just deploying the agent itself; it also requires deploying the various components the agent depends on for its operation. These may include:
A Checkpoint DB to save execution checkpoints,
A Broker and a Task Server to support asynchronous execution of long-running tasks.
To set up the components, define a YAML file containing the configuration for each component to be deployed.
Example - config.yaml:
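The exact schema depends on your Gru version; the sketch below only illustrates the idea of declaring each component with a name in one file. The field and component names here are assumptions, not the authoritative schema.

```yaml
# Illustrative only -- field names are assumptions, not the official Gru schema.
checkpoint_db:
  name: my-checkpoint-db
broker:
  name: my-broker
task_server:
  name: my-task-server
  broker: my-broker   # task server uses the broker declared above
```

The names you assign here are referenced again later, when the bootstrap step asks which Checkpoint DB and Task Server your agent should use.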
Run the gru command to set up the components.
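The actual subcommand and flag names are not reproduced here; a hypothetical invocation, assuming the config file from the previous step, might look like:

```shell
# Hypothetical invocation -- check `gru --help` for the real
# subcommand and flag names in your installed version.
gru setup -f config.yaml
```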
That's it! The components are now deployed in your cluster and ready to be integrated with your AI Agent.
Note: You can also choose to set up the components individually by creating a separate YAML file for each component and executing the setup command with the respective files.
Creating the project bootstrap
Set up the scaffold folder for your AI agent project by executing the command:
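The exact scaffolding subcommand is not reproduced here; a hypothetical invocation, with an illustrative project name, might look like:

```shell
# Hypothetical command -- the actual Gru subcommand for generating
# the project scaffold may differ.
gru init my-agent
```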
This will prompt you for a set of configurations for deploying your AI Agent. For example, the task_server_name and checkpoint_db_name specified here correspond to the names assigned when creating these components in the previous step. This allows the Canso AI Agentic System to connect your agent with the appropriate Checkpoint DB and Task Server.
After providing the required inputs, a bootstrap project folder is generated with the following structure:
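The full scaffold may contain additional files, but based on the files referenced later in this guide it includes at least the following:

```
my-agent/
├── config.yaml     # deployment configuration
├── .env            # environment variables for the agent
├── Dockerfile      # used to build the agent image
└── src/
    └── main.py     # entry point; wraps the agent with Canso wrappers
```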
Development and Image Build
Inside the created folder, define your AI agent and wrap it using the wrappers provided by Canso. All Python files should be placed inside the src folder, with src/main.py serving as the entry point for the application.
In src/main.py, ensure your agent is wrapped with the Canso Agent Wrappers. For instance, if you're creating the agent using LangGraph, your src/main.py should include something like the following:
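The wrapper class and import path below are assumptions for illustration; consult the Canso SDK reference for the actual names.

```python
# src/main.py -- illustrative sketch only.
# `canso.wrappers.LangGraphAgent` is a hypothetical import path,
# not the confirmed Canso API.
from langgraph.graph import StateGraph
from canso.wrappers import LangGraphAgent  # hypothetical wrapper import

# Assume build_graph() defines and returns your LangGraph StateGraph.
graph = build_graph()

# Wrap the compiled graph so Canso can manage checkpointing and
# asynchronous task execution for it.
agent = LangGraphAgent(graph=graph.compile())

if __name__ == "__main__":
    agent.run()
```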
Add the environment variables needed by your agent to the .env file and update the configuration in config.yaml if needed.
Create a Docker image and push it to a container registry:
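For example, using a placeholder registry and image name (replace the angle-bracket values with your own):

```shell
# Build the agent image from the project root and push it to your registry.
docker build -t <registry>/<image-name>:<tag> .
docker push <registry>/<image-name>:<tag>
```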
Register and Deploy Agent
Run the commands below to register and deploy your agent.
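The exact subcommand names are not reproduced here; hypothetical invocations, assuming registration and deployment are driven by the project's config.yaml, might look like:

```shell
# Hypothetical subcommands -- see `gru --help` for the exact names.
gru register -f config.yaml
gru deploy -f config.yaml
```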
Your agent is now deployed in your cluster and ready to receive prompts!
Sending prompts to the agent
To send prompts to your agent, create a JSON file containing the prompt and use the following command:
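The payload schema your agent expects is not specified here, so the key name below is an assumption. A minimal way to create the JSON file with the Python standard library:

```python
import json

# Hypothetical payload schema -- the key(s) your agent expects may differ.
prompt = {"prompt": "Summarize the open support tickets."}

# Write the prompt payload to prompt.json, ready to be passed to the
# gru command that sends it to the deployed agent.
with open("prompt.json", "w") as f:
    json.dump(prompt, f, indent=2)
```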
The prompt is then sent to your AI Agent for processing.
Next Steps
Read the Quickstart guide to develop and deploy a simple agent in your cluster.