Quickstart
This guide provides an example of setting up various AI Agentic components, as well as developing and deploying an AI Agent using the Canso AI Agentic System.
We'll create a simple sql-agent that can execute SQL queries based on natural language prompts.
Prerequisites
Before proceeding, ensure you have:

- A Canso-compatible Kubernetes cluster set up.
- Canso Helm charts installed on your cluster.
- Gru installed. To get started, install Gru by following the instructions here.
Setting up the components
Our sql-agent utilizes CansoSQLRunnerTool, which relies on a Task Server to execute the SQL queries. For orchestration between the agent and the Task Server, we also need a Broker. In addition, the agent uses a Checkpoint DB to save its state. Let us set up these components.
To set up the components, we first define a YAML file with the configurations for the components. Save the YAML defined below in a file named config.yaml.
```yaml
broker:
  type: redis
  name: my-redis
checkpoint_db:
  type: postgres
  name: my-postgres
  size: 4Gi
task_server:
  type: celery
  name: my-task-server
  replicas: 1
  concurrency_per_replica: 1
  broker_resource_name: my-redis
```
Now we run the gru command to set up the components:

```bash
gru component setup --cluster-name <name_of_your_cluster> --config-file config.yaml
```
The Broker, Checkpoint DB and Task Server are now set up in your cluster.
Creating the project bootstrap
Set up the scaffold folder for our sql-agent project by executing the command:

```bash
gru agent create_bootstrap
```
This will prompt us with a set of configurations for deploying our AI Agent. Provide inputs as specified below:
```
agent_name (Agent Name): sql-agent
agent_framework (Langgraph): Langgraph
task_server_name: my-task-server
checkpoint_db_name: my-postgres
replicas (1): 1
```
Once done, we get a folder sql-agent with the following structure:
```
sql-agent
├── .dockerignore      # Files to exclude from Docker build
├── .env               # Environment variables for the application
├── Dockerfile         # Docker build file
├── README.md          # Documentation placeholder
├── config.yaml        # Agent configuration settings
├── requirements.txt   # Python dependencies for your agent
└── src/
    └── main.py        # Entry point for the application
```
Developing the sql-agent
src/main.py serves as the entry point for our application. In this file, we define our AI Agent and wrap it with the CansoLanggraphAgent wrapper.
```python
import os
from typing import Annotated, Literal, TypedDict

from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from langgraph.graph import END, START, StateGraph
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode

from gru.agents.framework_wrappers.langgraph.agent import CansoLanggraphAgent
from gru.agents.tools.langgraph.sql_runner import CansoSQLRunnerTool

load_dotenv()

# SQL tool configured from environment variables (provided via the .env file)
sql_tool = CansoSQLRunnerTool(
    db_host=os.getenv("DB_HOST"),
    db_port=os.getenv("DB_PORT"),
    db_username=os.getenv("DB_USERNAME"),
    db_password=os.getenv("DB_PASSWORD"),
    db_name=os.getenv("DB_NAME"),
)

tools = [sql_tool]
tool_node = ToolNode(tools)

model = ChatOpenAI(
    model="gpt-4o",
    temperature=0,
    max_tokens=None,
    timeout=None,
    max_retries=2,
)
model = model.bind_tools(tools)


class State(TypedDict):
    messages: Annotated[list, add_messages]


def should_continue(state: State) -> Literal["end", "continue"]:
    # Route to the tool node if the model requested a tool call, else finish
    messages = state["messages"]
    last_message = messages[-1]
    if not last_message.tool_calls:
        return "end"
    return "continue"


async def call_model(state: State):
    messages = state["messages"]
    response = await model.ainvoke(messages)
    return {"messages": [response]}


workflow = StateGraph(State)
workflow.add_node("agent", call_model)
workflow.add_node("action", tool_node)
workflow.add_edge(START, "agent")
workflow.add_conditional_edges(
    "agent",
    should_continue,
    {
        "continue": "action",
        "end": END,
    },
)
workflow.add_edge("action", "agent")

canso_agent = CansoLanggraphAgent(stateGraph=workflow)
canso_agent.run()
```
This creates a simple ReAct Agent with Langgraph that uses gpt-4o as the model. Feel free to replace it with any other model of your choice.
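The should_continue routing is the heart of the ReAct loop: the agent keeps calling tools until the model stops requesting them. The sketch below illustrates that decision in isolation; SimpleNamespace is only a stand-in for a LangChain message object, since the routing cares solely about the tool_calls attribute.

```python
from types import SimpleNamespace

def should_continue(state) -> str:
    # Same decision as in the graph: route to tools if the model asked for one
    last_message = state["messages"][-1]
    if not last_message.tool_calls:
        return "end"
    return "continue"

msg_no_tools = SimpleNamespace(tool_calls=[])                   # model gave a final answer
msg_with_tools = SimpleNamespace(tool_calls=[{"name": "sql"}])  # model requested a tool

print(should_continue({"messages": [msg_no_tools]}))    # end
print(should_continue({"messages": [msg_with_tools]}))  # continue
```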
Note that the SQL DB connection details are read from environment variables. We provide the values for these environment variables in the .env file.
```
OPENAI_API_KEY=<your_openai_api_key>
DB_HOST=<your_db_host>
DB_PORT=<your_db_port>
DB_USERNAME=<your_db_username>
DB_PASSWORD=<your_db_password>
DB_NAME=<your_db_name>
```
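One detail worth checking: everything read via os.getenv is a string, so values like the port need converting before use. The sketch below demonstrates this with hypothetical placeholder values set directly on os.environ (in the agent, load_dotenv() populates these from the .env file instead).

```python
import os

# Hypothetical values for illustration only; in the agent these come from .env
os.environ.setdefault("DB_HOST", "localhost")
os.environ.setdefault("DB_PORT", "5432")
os.environ.setdefault("DB_NAME", "mydb")

db_host = os.getenv("DB_HOST")
db_port = int(os.getenv("DB_PORT"))  # env values are strings; convert explicitly
db_name = os.getenv("DB_NAME")

print(f"{db_host}:{db_port}/{db_name}")
```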
Now we build the Docker image for our Agent using the generated Dockerfile and push it to the repository.

```bash
docker build -t my-sql-agent:0.0.1 .
docker push my-sql-agent:0.0.1
```
Registering and Deploying the sql-agent
We run the commands below to register and deploy the sql-agent in your cluster.

```bash
# Register agent
gru agent register . --cluster-name <name_of_your_cluster> --image my-sql-agent:0.0.1

# Deploy agent
gru agent deploy sql-agent
```
sql-agent is now deployed in your cluster and ready to receive prompts!
Prompting the sql-agent
To prompt our sql-agent, we create a file prompt.json with the prompt.
```json
{
  "messages": [
    {
      "type": "human",
      "content": "Create a database table with name cars. It should have 3 columns: brand which will be a string, model which will also be a string and year which will be an integer."
    }
  ]
}
```
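The payload is plain JSON, so prompt files are easy to generate programmatically. The make_prompt helper below is our own illustration (not part of the Canso SDK) that writes a file in the same shape as the prompt.json above.

```python
import json

# Hypothetical helper (not part of the Canso SDK) that builds a payload
# in the same shape as prompt.json
def make_prompt(content: str) -> dict:
    return {"messages": [{"type": "human", "content": content}]}

payload = make_prompt(
    "Create a database table with name cars. It should have 3 columns: "
    "brand which will be a string, model which will also be a string "
    "and year which will be an integer."
)

with open("prompt.json", "w") as f:
    json.dump(payload, f, indent=2)

print(payload["messages"][0]["type"])  # human
```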
Now we execute the gru command to prompt the agent.

```bash
gru agent prompt sql-agent prompt.json
```
A table named cars should be created in your database!
Congratulations! You have successfully created and deployed an AI Agent using Canso AI Agentic System!