Use Cases
Introduction
Canso AI Agentic Systems provides a robust platform for deploying AI agents that can automate complex workflows and decision-making processes. At its core, the platform currently offers two powerful tools:
SQLRunnerTool: A tool for executing SQL queries against supported databases, enabling data retrieval, analysis, and updates. It provides secure, efficient database operations with built-in connection pooling and error handling.
KubernetesJobTool: A tool for managing containerized workloads in isolated environments, allowing parallel processing and scalable computations. It handles resource allocation, job scheduling, and execution monitoring.
NOTE: You can also create your own custom tools using the platform's extensible framework. This allows you to tailor tools to meet your unique requirements or integrate with specialized systems.
The following use cases demonstrate how these tools can be combined to build production-grade AI agent applications, focusing on risk analysis and machine learning operations.
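Each use case follows the same basic loop: the agent runs a query through SQLRunnerTool, then hands the result to a KubernetesJobTool job for heavier computation. The sketch below illustrates that loop in Python; the run_sql and submit_job helpers are placeholders standing in for the two tools, not the platform's actual API:
# Illustrative sketch only: run_sql and submit_job are placeholder helpers
# standing in for SQLRunnerTool and KubernetesJobTool; they are not the
# platform's real interface.
from dataclasses import dataclass, field


@dataclass
class JobSpec:
    """Minimal stand-in for a KubernetesJobTool job definition."""
    name: str
    image: str
    env: dict = field(default_factory=dict)


def run_sql(query: str) -> list[dict]:
    """Placeholder for SQLRunnerTool: execute a query and return rows."""
    return []  # stubbed; the real tool returns the query's result set


def submit_job(spec: JobSpec) -> str:
    """Placeholder for KubernetesJobTool: submit a job and return its ID."""
    return f"{spec.name}-0001"  # stubbed; the real tool schedules the job


# The recurring pattern: fetch data with a query, then hand the result to a
# containerized job for heavier computation.
rows = run_sql("SELECT user_id, amount FROM transactions LIMIT 100")
job_id = submit_job(JobSpec(name="example-analysis",
                            image="analysis-worker:v1",
                            env={"ROW_COUNT": str(len(rows))}))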
Risk Analysis and Fraud Detection
Transaction Analysis
AI agents can analyze transaction patterns in real time to identify potential fraud using a combination of database queries and computational jobs:
Query historical transaction data to establish baseline behavior
Run real-time comparisons against the established baseline patterns
Execute risk scoring algorithms as Kubernetes jobs
Store results for audit trails
Example scenario: An agent monitors credit card transactions, using SQLRunnerTool to query recent transaction history:
SELECT
    user_id,
    COUNT(*) as tx_count,
    AVG(amount) as avg_amount,
    STDDEV(amount) as std_amount,
    COUNT(DISTINCT merchant_category) as unique_categories,
    MAX(amount) - MIN(amount) as amount_range
FROM transactions
WHERE timestamp >= NOW() - INTERVAL '1 hour'
  AND user_id IN (SELECT user_id FROM high_risk_users)
GROUP BY user_id
HAVING COUNT(*) > 10
    OR MAX(amount) > 5000
The agent then uses KubernetesJobTool to run risk scoring algorithms on flagged transactions:
job:
  name: risk-score-calculation
  container:
    image: risk-scoring:v1
    resources:
      memory: "2Gi"
      cpu: "1"
    env:
      - name: TRANSACTION_DATA
        value: "{{ sql_result }}"
      - name: RISK_THRESHOLD
        value: "0.85"
    volumeMounts:
      - name: risk-models
        mountPath: /models
  volumes:
    - name: risk-models
      persistentVolumeClaim:
        claimName: risk-model-store
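What the risk-scoring:v1 container does internally is application-specific. As an illustrative assumption, such a job could compare each transaction against the per-user baseline returned by the query above (avg_amount, std_amount) and flag anything whose deviation score exceeds RISK_THRESHOLD:
# Hypothetical scoring logic; the real risk-scoring:v1 image may differ.
import os


def risk_score(amount: float, avg_amount: float, std_amount: float) -> float:
    """Score a transaction by its deviation from the user's hourly baseline.

    A simple z-score squashed into [0, 1); larger deviations approach 1.
    """
    if std_amount <= 0:
        return 0.0
    z = abs(amount - avg_amount) / std_amount
    return z / (1.0 + z)


threshold = float(os.environ.get("RISK_THRESHOLD", "0.85"))

# Example: a $4,800 charge against a baseline of $120 +/- $90 scores ~0.98,
# above the 0.85 threshold, so the transaction would be flagged.
score = risk_score(amount=4800.0, avg_amount=120.0, std_amount=90.0)
flagged = score >= threshold
print(f"score={score:.2f} flagged={flagged}")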
Rule-Based Decision Engine
Implement and manage fraud detection rules with dynamic updates and scalable processing:
Store rules in SQL databases
Execute rule evaluation in isolated containers
Scale rule processing based on transaction volume
Update rules dynamically based on new patterns
Example scenario: An agent evaluates transaction rules by using SQLRunnerTool to fetch active rules:
WITH rule_parameters AS (
    SELECT
        rule_id,
        rule_logic,
        thresholds,
        priority,
        last_updated
    FROM fraud_rules
    WHERE status = 'active'
      AND business_unit = 'credit_cards'
      AND enabled = true
)
SELECT
    r.*,
    m.model_path,
    m.version
FROM rule_parameters r
LEFT JOIN rule_models m
    ON r.rule_id = m.rule_id
   AND m.status = 'deployed'
ORDER BY r.priority DESC
The agent then uses KubernetesJobTool to evaluate these rules against transaction batches:
job:
  name: rule-evaluation
  replicas: "{{ transaction_volume_scale }}"
  container:
    image: rule-engine:v2
    resources:
      memory: "4Gi"
      cpu: "2"
    env:
      - name: RULES_CONFIG
        value: "{{ sql_result }}"
      - name: BATCH_SIZE
        value: "1000"
    volumeMounts:
      - name: rules-output
        mountPath: /output
  volumes:
    - name: rules-output
      persistentVolumeClaim:
        claimName: rules-evaluation-store
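The replicas field scales rule processing with transaction volume. The platform does not prescribe how transaction_volume_scale is computed; one plausible approach, sketched below as an assumption, sizes the replica count from the number of pending transactions and the job's BATCH_SIZE:
# Assumed sizing logic for transaction_volume_scale; the platform does not
# prescribe this formula.
import math


def transaction_volume_scale(pending_transactions: int,
                             batch_size: int = 1000,
                             max_replicas: int = 20) -> int:
    """Pick a replica count so each replica handles roughly one batch."""
    if pending_transactions <= 0:
        return 1
    needed = math.ceil(pending_transactions / batch_size)
    return max(1, min(needed, max_replicas))


# Example: 7,500 pending transactions with BATCH_SIZE=1000 -> 8 replicas.
print(transaction_volume_scale(7500))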
Machine Learning Operations
Model Deployment
Streamline the model deployment process with automated validation and monitoring:
Query model performance metrics
Run model validation jobs
Execute A/B tests
Monitor deployment health
Example scenario: An agent manages model deployment using SQLRunnerTool to check performance metrics:
WITH model_metrics AS (
    SELECT
        model_id,
        version,
        AVG(accuracy) as avg_accuracy,
        AVG(latency_ms) as avg_latency,
        COUNT(DISTINCT prediction_id) as prediction_count
    FROM model_predictions
    WHERE timestamp >= NOW() - INTERVAL '24 hours'
    GROUP BY model_id, version
)
SELECT
    m.*,
    CASE
        WHEN avg_accuracy < 0.85 OR avg_latency > 100 THEN 'fail'
        ELSE 'pass'
    END as health_check
FROM model_metrics m
The agent then uses KubernetesJobTool to handle model deployment:
job:
  name: model-deployment
  container:
    image: model-deployer:v1
    resources:
      memory: "8Gi"
      cpu: "4"
      gpu: "1"
    env:
      - name: MODEL_ID
        value: "{{ model_id }}"
      - name: VERSION
        value: "{{ version }}"
      - name: DEPLOYMENT_TYPE
        value: "{{ 'canary' if is_new_model else 'full' }}"
    volumeMounts:
      - name: model-storage
        mountPath: /models
      - name: deployment-config
        mountPath: /config
  volumes:
    - name: model-storage
      persistentVolumeClaim:
        claimName: model-registry
    - name: deployment-config
      configMap:
        name: deployment-parameters
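The DEPLOYMENT_TYPE template selects a canary rollout for new models and a full rollout otherwise, while the health_check column from the metrics query gates whether deployment happens at all. A minimal sketch of that decision, assuming the query result reaches the agent as row dictionaries (the policy itself is illustrative, not the platform's built-in behavior):
# Assumed gating logic; field names mirror the metrics query above
# (model_id, version, health_check), but the decision policy is illustrative.

def choose_deployment(row: dict, is_new_model: bool) -> str | None:
    """Return 'canary', 'full', or None (skip) for one model/version row."""
    if row["health_check"] != "pass":
        return None  # failed the accuracy/latency check: do not deploy
    return "canary" if is_new_model else "full"


metrics_row = {"model_id": "example-model", "version": "3.1",
               "avg_accuracy": 0.91, "avg_latency": 42,
               "health_check": "pass"}

deployment_type = choose_deployment(metrics_row, is_new_model=True)
print(deployment_type)  # -> canary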
Future Enhancements
The platform roadmap includes enhancements such as:
Additional built-in tools for advanced data preprocessing, real-time analytics, and integration with emerging AI frameworks