Canso - ML Platform

Provision K8s Clusters


Last updated 11 months ago


This guide illustrates how to provision Kubernetes clusters on different cloud providers using Terraform. Jobs, pipelines, services, and applications managed by Canso run on these provisioned clusters.

These clusters are also referred to as the Data Plane cluster. The Data Plane resides in the customer's/tenant's cloud, so your data never leaves your cloud environment.
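Provisioning with Terraform follows the standard init/plan/apply workflow. A minimal sketch, assuming you have cloned the Terraform repository and configured cloud credentials (the variables file name here is illustrative, not Canso-specific):

```bash
# Initialize providers and modules for the cluster configuration
terraform init

# Preview the resources that will be created (VPC, cluster, node groups, etc.)
terraform plan -var-file=example.tfvars

# Apply the plan to provision the Data Plane cluster
terraform apply -var-file=example.tfvars
```

The actual variables and modules to use are defined in the repository referenced below; always review the plan output before applying.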

AWS - EKS

The Data Plane cluster (which is compatible with the Canso Platform) has the following components:

  • VPC and Subnets

  • EKS Cluster

  • IAM/IRSA Roles

  • Storage drivers such as EFS and EBS

  • A production-grade database such as RDS

  • S3 Buckets (where Canso jobs will persist outputs and artifacts)

  • AWS Secrets Manager for storing secrets

  • A key-value store such as Redis

All necessary modules and scripts to provision the above are in the open-source canso-data-plane-k8s-cluster-tf repository.

Make sure you follow the steps noted in the repository's Usage section, and be mindful of the disclaimers and special comments highlighted there. If you do not have a dedicated DevOps/Infra team, the Canso team can help provision the infrastructure for you. Please reach out if you need assistance; see the Canso Community page for ways to contact us.
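Once the EKS cluster is provisioned, you can verify access from your workstation. A minimal sketch using the AWS CLI and kubectl (the region and cluster name are placeholders; use the values from your Terraform configuration):

```bash
# Write kubeconfig credentials for the newly provisioned cluster
aws eks update-kubeconfig --region us-east-1 --name my-data-plane-cluster

# Confirm the worker nodes are registered and Ready;
# Canso jobs, pipelines, and services will be scheduled onto these nodes
kubectl get nodes
```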

GCP - GKE

Azure - AKS

Oracle - OKE

Red Hat Openshift

