Helm Chart

The Cartesian Outpost can be deployed on Kubernetes using our official Helm chart. This method is recommended for Kubernetes environments and provides advanced deployment options, including autoscaling and custom resource management.

Chart Repository

The Helm chart is published to Amazon ECR Public as an OCI artifact:

public.ecr.aws/cartesian/outpost

You can browse available versions in the Amazon ECR Public Gallery.
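
You can also inspect the chart's default values directly from the registry before installing (requires Helm 3.8+):

helm show values oci://public.ecr.aws/cartesian/outpost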

Prerequisites

  • Kubernetes 1.32+ (might work on earlier versions)
  • Helm 3.8+ (OCI registry support is required to pull the chart)
  • A Redis or Valkey cache instance (required; a quick test option is sketched below)
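
If you only need a throwaway cache for evaluation, one option is the public Bitnami Redis chart (a third-party chart, not something the Outpost chart requires or manages); point config.cache at the resulting service afterwards:

helm install my-redis oci://registry-1.docker.io/bitnamicharts/redis \
  --set architecture=standalone \
  --set auth.password='your-redis-password'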

Installing the Chart

The chart is distributed as an OCI artifact, so there is no repository to add with helm repo add. Install it directly from the registry:

helm install my-outpost oci://public.ecr.aws/cartesian/outpost \
  --set config.outpostProjectId=your-project-id \
  --set config.outpostAccessKey.value=your-access-key
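
For repeatable installs, pin a chart version and keep overrides in a values file instead of --set flags; the version below is a placeholder, not a published release:

helm install my-outpost oci://public.ecr.aws/cartesian/outpost \
  --version 1.2.3 \
  -f values.yaml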

Configuration

The following table lists the configurable parameters of the Outpost chart and their default values.

Core Configuration

Parameter | Description | Default
replicaCount | Number of Outpost replicas | 1
image.repository | Outpost container image | public.ecr.aws/cartesian/outpost-backend
image.pullPolicy | Image pull policy | IfNotPresent
image.tag | Image tag | The current chart appVersion
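
For example, to run extra replicas and pin a specific image tag, set the corresponding values in your values file (the tag below is a placeholder, not a published version):

# values.yaml
replicaCount: 3
image:
  repository: public.ecr.aws/cartesian/outpost-backend
  pullPolicy: IfNotPresent
  tag: 'v1.2.3'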

Outpost Configuration

Parameter | Description | Default
config.outpostProjectId | Project ID (provided by Cartesian) | N/A
config.outpostAccessKey.value | Outpost access key | N/A
config.outpostAccessKey.createSecret | Create a Kubernetes secret for the access key | false
config.outpostAccessKey.secretName | Name of existing secret for access key | N/A
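
For instance, to have the chart wrap the access key in a Kubernetes secret rather than passing it to the pod as a plain value, combine value with createSecret (both parameters are listed above):

# values.yaml
config:
  outpostProjectId: 'your-project-id'
  outpostAccessKey:
    createSecret: true
    value: 'your-access-key'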

Telemetry Configuration

By default, the Outpost sends operational telemetry data to Cartesian to help us improve the service and provide better support. This includes performance metrics, usage patterns, and diagnostic information. No sensitive customer data is included in telemetry.

Parameter | Description | Default
config.telemetry.disableErrorMonitoring | Disable error-monitoring telemetry sent to Cartesian | false
config.telemetry.tracing.disabled | Disable telemetry tracing | false
config.telemetry.tracing.credentials.createSecret | Create a Kubernetes secret for the telemetry tracing ingestion key | false
config.telemetry.tracing.credentials.secretName | Name of existing secret for telemetry tracing ingestion key | N/A
config.telemetry.tracing.credentials.values.ingestionKey | Telemetry ingestion key (provided by Cartesian) | N/A

To opt out, set config.telemetry.disableErrorMonitoring and/or config.telemetry.tracing.disabled to true in your values file.
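
For example, a values file that turns both off:

# values.yaml
config:
  telemetry:
    disableErrorMonitoring: true
    tracing:
      disabled: true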

Cache Configuration

Parameter | Description | Default
config.cache.host | Cache host | N/A
config.cache.port | Cache port | N/A
config.cache.password.value | Cache password | N/A
config.cache.password.createSecret | Create a Kubernetes secret for cache password | false
config.cache.password.secretName | Name of existing secret for cache password | N/A
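
If you would rather not keep the cache password in your values file, config.cache.password.secretName points the chart at an existing secret instead. The secret name below is a hypothetical example, and the key the chart expects inside that secret is not documented here, so check the chart's values.yaml or templates before relying on this:

# values.yaml
config:
  cache:
    host: 'redis.default.svc.cluster.local'
    port: '6379'
    password:
      # Hypothetical pre-existing secret; the expected key name is chart-specific.
      secretName: 'outpost-cache-credentials'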

LLM Gateway Configuration

The Outpost supports multiple LLM gateways. Configure one of the following:

AWS Bedrock Configuration

Parameter | Description | Default
config.bedrock.awsRegion | AWS Region | us-east-1
config.bedrock.credentials.createSecret | Create secrets for the credentials | false
config.bedrock.credentials.values.awsAccessKeyId | AWS Access Key ID | N/A
config.bedrock.credentials.values.awsSecretAccessKey | AWS Secret Access Key | N/A

OpenRouter Configuration

Parameter | Description | Default
config.openrouter.credentials.createSecret | Create a Kubernetes secret for the API key | false
config.openrouter.credentials.secretName | Name of existing secret for API key | N/A
config.openrouter.credentials.value | OpenRouter API key (see Getting Started for how to obtain one) | N/A

Note: When using OpenRouter, you do not need to configure AWS Bedrock credentials.

Azure AI Foundry Configuration

Parameter | Description | Default
config.azureFoundry.endpoint | Azure AI Foundry endpoint URL (e.g., https://your-resource.cognitiveservices.azure.com/openai/v1/) | N/A
config.azureFoundry.credentials.createSecret | Create a Kubernetes secret for the API key | false
config.azureFoundry.credentials.secretName | Name of existing secret for API key | N/A
config.azureFoundry.credentials.value | Azure AI Foundry API key (optional - if not provided, uses Managed Identity) | N/A

Authentication Methods:

  • Managed Identity (default when API key is not provided): Automatically uses the managed identity assigned to your AKS cluster. Ensure the identity has the Cognitive Services OpenAI User or Cognitive Services User role (a role-assignment sketch follows the note below).
  • API Key (when credentials.value is provided): Uses the provided API key for authentication.

Note: When using Azure AI Foundry, you do not need to configure AWS Bedrock or OpenRouter credentials.
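
For the Managed Identity path, the identity used by your AKS workload must hold one of the roles above on the Azure AI Foundry resource. A sketch with the Azure CLI, where the principal ID and resource ID are placeholders you need to look up in your subscription:

az role assignment create \
  --assignee <managed-identity-principal-id> \
  --role "Cognitive Services OpenAI User" \
  --scope <azure-ai-foundry-resource-id>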

Networking

Parameter | Description | Default
service.type | Kubernetes service type | ClusterIP
service.port | Service port | 3001
ingress.enabled | Enable ingress | false
ingress.className | Ingress class name | N/A

Scaling and Resources

Parameter | Description | Default
autoscaling.enabled | Enable autoscaling | false
autoscaling.minReplicas | Minimum replicas | 1
autoscaling.maxReplicas | Maximum replicas | 100
autoscaling.targetCPUUtilizationPercentage | Target CPU utilization | 80

Example Configurations

Basic Installation

This example uses AWS Bedrock with IAM roles (suitable for pods with appropriate IAM permissions via service accounts). For other configurations, add the appropriate values as described below.

# values.yaml
config:
  outpostProjectId: 'your-project-id'
  outpostAccessKey:
    value: 'your-access-key'
  cache:
    host: 'your-redis-host'
    port: '6379'
    password:
      value: 'your-redis-password'

  # Choose one of the following LLM gateways:

  # For OpenRouter, uncomment these lines:
  # openrouter:
  #   credentials:
  #     value: 'your-openrouter-api-key'

  # For AWS Bedrock with access keys (if not using IAM roles), uncomment these:
  # bedrock:
  #   awsRegion: 'us-east-1'
  #   credentials:
  #     values:
  #       awsAccessKeyId: 'your-aws-access-key-id'
  #       awsSecretAccessKey: 'your-aws-secret-access-key'

  # For Azure AI Foundry with Managed Identity (on AKS), uncomment this:
  # azureFoundry:
  #   endpoint: 'https://your-resource.cognitiveservices.azure.com/openai/v1/'

  # For Azure AI Foundry with API key, uncomment these:
  # azureFoundry:
  #   endpoint: 'https://your-resource.cognitiveservices.azure.com/openai/v1/'
  #   credentials:
  #     value: 'your-azure-foundry-api-key'
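
Install (or later re-apply) the release with this file; helm upgrade --install works for both the first install and subsequent changes:

helm upgrade --install my-outpost oci://public.ecr.aws/cartesian/outpost -f values.yaml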

Production Setup

# values.yaml
replicaCount: 2

config:
  outpostProjectId: 'your-project-id'
  outpostAccessKey:
    createSecret: true
    value: 'your-access-key'
  cache:
    host: 'redis.default.svc.cluster.local'
    port: '6379'
    password:
      createSecret: true
      value: 'your-redis-password'

  # Choose one of the following LLM gateways:

  # For OpenRouter, uncomment these lines:
  # openrouter:
  #   credentials:
  #     createSecret: true
  #     value: 'your-openrouter-api-key'

  # For AWS Bedrock with access keys (if not using IAM roles), uncomment these:
  # bedrock:
  #   awsRegion: 'us-east-1'
  #   credentials:
  #     createSecret: true
  #     values:
  #       awsAccessKeyId: 'your-aws-access-key-id'
  #       awsSecretAccessKey: 'your-aws-secret-access-key'

  # For Azure AI Foundry with Managed Identity (on AKS), uncomment this:
  # azureFoundry:
  #   endpoint: 'https://your-resource.cognitiveservices.azure.com/openai/v1/'

  # For Azure AI Foundry with API key, uncomment these:
  # azureFoundry:
  #   endpoint: 'https://your-resource.cognitiveservices.azure.com/openai/v1/'
  #   credentials:
  #     createSecret: true
  #     value: 'your-azure-foundry-api-key'

autoscaling:
  enabled: true
  minReplicas: 2
  maxReplicas: 5
  targetCPUUtilizationPercentage: 80

ingress:
  enabled: true
  className: nginx
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt
  hosts:
    - host: outpost.your-domain.com
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: outpost-tls
      hosts:
        - outpost.your-domain.com

LLM Gateway Options:

  • AWS Bedrock with IAM Roles: If your Kubernetes pods have appropriate IAM permissions via service accounts (e.g., using IRSA on EKS), no additional configuration is needed; the container will use the pod's IAM role automatically (an IRSA values sketch follows this list).
  • AWS Bedrock with Access Keys: For clusters without IAM service account integration, uncomment and configure the bedrock section with your AWS credentials. Consider using createSecret: true to store credentials securely.
  • OpenRouter: Uncomment and configure the openrouter section to use OpenRouter. Use createSecret: true to store the API key securely.
  • Azure AI Foundry with Managed Identity: If your AKS cluster has a managed identity with appropriate Azure RBAC permissions, uncomment and configure only the endpoint in the azureFoundry section. The container will automatically use the managed identity.
  • Azure AI Foundry with API Key: Uncomment and configure the azureFoundry section with both endpoint and credentials. Use createSecret: true to store the API key securely.
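
For the IRSA route on EKS, the usual pattern is to annotate the service account the Outpost pods run as with an IAM role ARN that allows Bedrock access. Whether this chart exposes the conventional serviceAccount block is an assumption here; confirm against the chart's values.yaml before relying on it (the role ARN below is a placeholder):

# values.yaml (assumes the chart exposes the common serviceAccount.annotations value)
serviceAccount:
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/outpost-bedrock-role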

Health Monitoring

The chart includes pre-configured liveness and readiness probes that check the /health endpoint. The default configuration is:

livenessProbe:
  httpGet:
    path: /health
    port: http
  initialDelaySeconds: 5
  periodSeconds: 5

readinessProbe:
  httpGet:
    path: /health
    port: http
  initialDelaySeconds: 5
  periodSeconds: 5
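
To hit the same endpoint manually after installation, port-forward the service and curl it. The service name below assumes it matches the release name; adjust it to whatever kubectl get svc reports for your install:

kubectl port-forward svc/my-outpost 3001:3001
curl http://localhost:3001/health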

Upgrading

To upgrade an existing installation:

helm upgrade my-outpost oci://public.ecr.aws/cartesian/outpost -f values.yaml
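
If an upgrade misbehaves, Helm keeps the release history and can roll back to an earlier revision:

helm history my-outpost
helm rollback my-outpost <revision-number>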

Uninstalling

To uninstall/delete the deployment:

helm uninstall my-outpost

Notes

  1. For production deployments:

    • Enable autoscaling for high availability
    • Configure appropriate resource requests and limits
    • Use secrets for sensitive information
    • Enable and configure ingress with TLS
    • Set up proper monitoring and alerting
  2. Security considerations:

    • Store sensitive values in Kubernetes secrets
    • Use TLS for ingress
    • Configure network policies to restrict access (a hedged NetworkPolicy sketch follows these notes)
    • Apply updates and security patches regularly
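
As a starting point for the network-policy item above, here is a minimal sketch that only admits traffic from an ingress-controller namespace on the service port. The pod and namespace labels are assumptions based on common conventions; verify the labels the chart actually applies (kubectl get pods --show-labels) before using it:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: outpost-restrict-ingress
spec:
  # Assumed release label; check the labels on the Outpost pods.
  podSelector:
    matchLabels:
      app.kubernetes.io/instance: my-outpost
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: ingress-nginx
      ports:
        - protocol: TCP
          port: 3001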