Quickstart

This guide provides an example of setting up various AI Agentic components, as well as developing and deploying an AI Agent using the Canso AI Agentic System.

We'll create a simple sql-agent that can execute SQL queries based on natural language prompts.

Prerequisites

Before proceeding, ensure you have:

  1. A Canso compatible Kubernetes cluster set up.

  2. Canso Helm charts installed on your cluster.

To get started, install Gru by following the instructions here.

Setting up the components

Our sql-agent utilizes the CansoSQLRunnerTool, which relies on a Task Server to execute the SQL queries. For orchestration between the agent and the Task Server, we also need a Broker. In addition, the agent uses a Checkpoint DB to save its state. Let us set up these components.

To set up the components, we first define their configurations in a YAML file. Save the YAML below in a file named config.yaml.

# Broker used for orchestration between the agent and the Task Server
broker:
    type: redis
    name: my-redis
# Checkpoint DB used by the agent to save its state
checkpoint_db:
    type: postgres
    name: my-postgres
    size: 4Gi
# Task Server that executes the SQL queries
task_server:
    type: celery
    name: my-task-server
    replicas: 1
    concurrency_per_replica: 1
    broker_resource_name: my-redis    # must match the broker name above

Now we run the gru command to set up the components:

gru component setup --cluster-name <name_of_your_cluster> --config-file config.yaml

The Broker, Checkpoint DB and Task Server are now set up in your cluster.
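
If you want to verify the setup, you can list the pods in your cluster. This is a hedged check, assuming each component runs as one or more pods whose names contain the component names chosen in config.yaml:

kubectl get pods --all-namespaces | grep -E "my-redis|my-postgres|my-task-server"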

Creating the project bootstrap

Set up the scaffold folder for our sql-agent project by executing the command:

gru agent create_bootstrap

This will prompt us for a set of configuration values for deploying our AI Agent. Provide inputs as specified below:

agent_name (Agent Name): sql-agent
agent_framework (Langgraph): Langgraph        
task_server_name: my-task-server
checkpoint_db_name: my-postgres
replicas (1): 1

Once done, we get a folder sql-agent with the following structure:

sql-agent
β”œβ”€β”€ .dockerignore           # Files to exclude from Docker build
β”œβ”€β”€ .env                    # Environment variables for the application
β”œβ”€β”€ Dockerfile              # Docker build file
β”œβ”€β”€ README.md               # Documentation placeholder
β”œβ”€β”€ config.yaml             # Agent configuration settings
β”œβ”€β”€ requirements.txt        # Python dependencies for your agent
└── src/
    └── main.py             # Entry point for the application
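
requirements.txt lists the Python dependencies for your agent. As a rough sketch (the exact package names and versions are assumptions, and the bootstrap may already include the Canso client package), the sql-agent we build below needs at least:

langchain-openai
langgraph
python-dotenv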

Developing the sql-agent

src/main.py serves as the entry point for our application. In this file, we define our AI Agent and wrap it with the CansoLanggraphAgent wrapper.

import os
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from langgraph.graph.message import add_messages
from typing import Annotated, Literal, TypedDict
from langgraph.prebuilt import ToolNode

from gru.agents.framework_wrappers.langgraph.agent import CansoLanggraphAgent
from gru.agents.tools.langgraph.sql_runner import CansoSQLRunnerTool
from langgraph.graph import END, StateGraph, START

load_dotenv()

# SQL execution tool; connection details are read from environment variables loaded from .env
sql_tool = CansoSQLRunnerTool(
    db_host=os.getenv("DB_HOST"),
    db_port=os.getenv("DB_PORT"),
    db_username=os.getenv("DB_USERNAME"),
    db_password=os.getenv("DB_PASSWORD"),
    db_name=os.getenv("DB_NAME")
)

tools = [sql_tool]
tool_node = ToolNode(tools)

# gpt-4o chat model used by the agent
model = ChatOpenAI(model="gpt-4o", temperature=0, max_tokens=None, timeout=None, max_retries=2)
model = model.bind_tools(tools)

class State(TypedDict):
    messages: Annotated[list, add_messages]

# Route to the tool node if the last model response requested a tool call, otherwise finish
def should_continue(state: State) -> Literal["end", "continue"]:
    messages = state["messages"]
    last_message = messages[-1]
    if not last_message.tool_calls:
        return "end"
    else:
        return "continue"

async def call_model(state: State):
    messages = state["messages"]
    response = await model.ainvoke(messages)
    return {"messages": [response]}


# Build the agent graph: the model node decides whether to call the SQL tool or end
workflow = StateGraph(State)

workflow.add_node("agent", call_model)
workflow.add_node("action", tool_node)

workflow.add_edge(START, "agent")

workflow.add_conditional_edges(
    "agent",
    should_continue,
    {
        "continue": "action",
        "end": END,
    },
)

workflow.add_edge("action", "agent")

# Wrap the LangGraph workflow with the Canso wrapper and start the agent
canso_agent = CansoLanggraphAgent(stateGraph=workflow)
canso_agent.run()

Note that the SQL database connection details are read from environment variables. We provide the values for these environment variables in the .env file.

OPENAI_API_KEY=<your_openai_api_key>
DB_HOST=<your_db_host>
DB_PORT=<your_db_port>
DB_USERNAME=<your_db_username>
DB_PASSWORD=<your_db_password>
DB_NAME=<your_db_name>

Now we build the Docker image for our agent using the generated Dockerfile and push it to your image repository.

docker build -t my-sql-agent:0.0.1 .
docker push my-sql-agent:0.0.1
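
Depending on your setup, the image tag may need to include a container registry prefix before docker push will succeed. A hedged example, with <your_registry> standing in for your registry or Docker Hub username (pass the same tag to --image when registering the agent below):

docker build -t <your_registry>/my-sql-agent:0.0.1 .
docker push <your_registry>/my-sql-agent:0.0.1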

Registering and Deploying the sql-agent

Run the commands below to register and deploy the sql-agent in your cluster.

# Register agent
gru agent register . --cluster-name <name_of_your_cluster> --image my-sql-agent:0.0.1

# Deploy agent
gru agent deploy sql-agent

sql-agent is now deployed in your cluster and ready to receive prompts!

Prompting the sql-agent

To prompt our sql-agent, we create a file named prompt.json containing the prompt.

{
    "messages": [
        {
            "type": "human",
            "content": "Create a database table with name cars. It should have 3 columns: brand which will be a string, model which will also be a string and year which will be an integer."
        }
    ]
}

Now we execute the gru command to prompt the agent.

gru agent prompt sql-agent prompt.json

A table named cars should now be created in your database!
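
You can keep prompting the deployed agent with follow-up messages. For example, a hypothetical prompt.json that asks the agent to insert a row into the new table:

{
    "messages": [
        {
            "type": "human",
            "content": "Insert a row into the cars table with brand Toyota, model Corolla and year 2020."
        }
    ]
}

gru agent prompt sql-agent prompt.json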

Congratulations! You have successfully created and deployed an AI Agent using the Canso AI Agentic System!

The agent we built is a simple ReAct Agent with Langgraph that uses gpt-4o as the model. Feel free to replace it with any other model of your choice.
