# Getting Started

This guide provides a general overview of the Canso AI Agentic System, introducing key concepts and explaining how they interconnect to simplify the development and deployment of your AI agents.

## Prerequisites

The Canso AI Agentic System is built on the foundation of [Canso Architecture](https://docs.canso.ai/architecture). Before proceeding, ensure you have:

1. A [Canso compatible Kubernetes cluster](https://docs.canso.ai/getting-started/provision-k8s-cluster) set up.
2. [Canso Helm charts](https://docs.canso.ai/getting-started/canso-helm-charts) installed on your cluster.

To get started, install Gru by following the instructions [here](https://docs.canso.ai/getting-started/canso-py-client).

## Setting up the components

Deploying an AI agent involves more than just deploying the agent itself; it also requires deploying the various components the agent depends on for its operation. These may include:

1. A [Checkpoint DB](https://docs.canso.ai/ai-agents/concepts/db) to save execution checkpoints.
2. A [Broker](https://docs.canso.ai/ai-agents/concepts/broker) and a [Task Server](https://docs.canso.ai/ai-agents/concepts/task-server) to support asynchronous execution of long-running tasks.

To set up the components,

1. Define a YAML file containing the configuration for each component to be deployed.

   Example `config.yaml`:

   ```yaml
   broker:
     type: redis
     name: my-redis
   checkpoint_db:
     type: postgres
     name: my-postgres
     size: 4Gi
   task_server:
     type: celery
     name: my-task-server
     replicas: 4
     concurrency_per_replica: 1
     broker_resource_name: my-redis
   ```
2. Run the `gru` command to set up the components:

   ```bash
   gru component setup --cluster-name <name_of_your_cluster> --config-file config.yaml
   ```

That's it! The components are now deployed in your cluster and ready to be integrated with your AI Agent.

**Note**: You can also choose to set up the components individually by creating a separate YAML file for each component and executing the setup command with the respective files.
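For instance, to set up only the broker, you could create a standalone file containing just that component's block (the filename and split shown here are illustrative):

```yaml
# broker.yaml - configuration for the broker component alone
broker:
  type: redis
  name: my-redis
```

You would then run the same `gru component setup` command, passing `broker.yaml` as the `--config-file`.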

## Creating the project bootstrap

Set up the scaffold folder for your AI agent project by executing the command:

```bash
gru agent create_bootstrap
```

This will prompt you for a set of configurations for deploying your AI Agent. For example:

```bash
agent_name (Agent Name): my-agent
agent_framework (Langgraph): Langgraph        
task_server_name: my-task-server
checkpoint_db_name: my-postgres
replicas (1): 1
```

The `task_server_name` and `checkpoint_db_name` specified here correspond to the names assigned when creating these components in the previous step. This allows the Canso AI Agentic System to connect your agent with the appropriate Checkpoint DB and Task Server.

After providing the required inputs, a bootstrap project folder is generated with the following structure:

```
.
├── .dockerignore           # Files to exclude from Docker build
├── .env                    # Environment variables for the application
├── Dockerfile              # Docker build file
├── README.md               # Documentation placeholder
├── config.yaml             # Agent configuration settings
├── requirements.txt        # Python dependencies for your agent
└── src/
    └── main.py             # Entry point for the application
```

## Development and Image Build

Inside the created folder, define your AI agent and wrap it using the wrappers provided by Canso. All Python files should be placed inside the `src` folder, with `src/main.py` serving as the entry point for the application.

In `src/main.py`, ensure your agent is wrapped with the Canso Agent Wrappers. For instance, if you’re creating the agent using Langgraph, your `src/main.py` should include something like the following:

```python
from gru.agents import CansoLanggraphAgent

# ... your agent code ...

canso_agent = CansoLanggraphAgent(stateGraph=<your langgraph agent>)
canso_agent.run()
```

Add the environment variables needed by your agent in `.env` file and update configurations in `config.yaml` if needed.
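As an illustration, a `.env` file might look like the following; the variable names here are hypothetical and depend entirely on what your agent code reads:

```
# Hypothetical environment variables - replace with whatever your agent expects
OPENAI_API_KEY=<your-api-key>
LOG_LEVEL=info
```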

Create a Docker image and push it to a container registry:

```bash
docker build -t my-agent-image:tag .
docker push my-agent-image:tag
```

## Register and Deploy Agent

Run the commands below to register and deploy your agent:

```bash
# Register agent
gru agent register . --cluster-name <name_of_your_cluster> --image my-agent-image:tag

# Deploy agent
gru agent deploy my-agent
```

Your agent is now deployed in your cluster and ready to receive prompts!

## Sending prompts to the agent

To send prompts to your agent, create a JSON file containing the prompt and use the following command:

```bash
gru agent prompt my-agent <path_to_your_json_file>
```

The prompt is then sent to your AI agent for processing.
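The expected JSON structure depends on how your agent consumes its input; as a hypothetical example, a file such as `prompt.json` might contain:

```json
{
  "prompt": "Summarize the latest customer feedback"
}
```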

## Next Steps

* Read the [Quickstart](https://docs.canso.ai/ai-agents/quickstart) guide to develop and deploy a simple agent in your cluster.
