Helm Chart
The Cartesian Outpost can be deployed on Kubernetes using our official Helm chart. This is the recommended method for Kubernetes environments and provides advanced deployment options, including autoscaling and custom resource management.
Chart Repository
The Helm chart is published to Amazon ECR Public:
public.ecr.aws/cartesian/outpost
You can browse available versions in the Amazon ECR Public Gallery.
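If you want to inspect the chart before installing, you can pull it locally with Helm's OCI registry support (the version shown is illustrative; check the gallery for current releases):

# Download a chart version for local inspection (version is illustrative)
helm pull oci://public.ecr.aws/cartesian/outpost --version 1.0.0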
Prerequisites
- Kubernetes 1.32 or later (earlier versions may work but are untested)
- Helm 3.8 or later (needed to install charts from OCI registries)
- A Redis or Valkey cache instance (required)
Installing the Chart
- Install the chart directly from the OCI registry (charts on Amazon ECR Public are distributed as OCI artifacts, so there is no helm repo add step):

helm install my-outpost oci://public.ecr.aws/cartesian/outpost \
  --set config.outpostProjectId=your-project-id \
  --set config.outpostAccessKey.value=your-access-key
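After installation, you can confirm the release came up. The label selector below assumes the chart applies the standard app.kubernetes.io/instance label to its pods:

# Check the release status and pod health
helm status my-outpost
kubectl get pods -l app.kubernetes.io/instance=my-outpost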
Configuration
The following table lists the configurable parameters of the Outpost chart and their default values.
Core Configuration
| Parameter | Description | Default |
|---|---|---|
| replicaCount | Number of Outpost replicas | 1 |
| image.repository | Outpost container image | public.ecr.aws/cartesian/outpost-backend |
| image.pullPolicy | Image pull policy | IfNotPresent |
| image.tag | Image tag | The current chart appVersion |
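For example, to pin a specific image tag instead of following the chart's appVersion (the tag below is a placeholder):

image:
  repository: public.ecr.aws/cartesian/outpost-backend
  tag: 'v1.2.3'          # placeholder; use a published tag
  pullPolicy: IfNotPresent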
Outpost Configuration
| Parameter | Description | Default |
|---|---|---|
| config.outpostProjectId | Project ID (provided by Cartesian) | N/A |
| config.outpostAccessKey.value | Outpost access key | N/A |
| config.outpostAccessKey.createSecret | Create a Kubernetes secret for the access key | false |
| config.outpostAccessKey.secretName | Name of existing secret for access key | N/A |
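As a sketch of using an existing Secret instead of an inline value: create the Secret first, then point secretName at it. The Secret name and its data key ('accessKey') are assumptions; check the chart templates for the key it actually reads.

# Hypothetical Secret; the data key name is an assumption
kubectl create secret generic outpost-access-key \
  --from-literal=accessKey='your-access-key'

Then reference it in your values file:

config:
  outpostProjectId: 'your-project-id'
  outpostAccessKey:
    secretName: outpost-access-key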
Telemetry Configuration
By default, the Outpost sends operational telemetry data to Cartesian to help us improve the service and provide better support. This includes performance metrics, usage patterns, and diagnostic information. No sensitive customer data is included in telemetry.
| Parameter | Description | Default |
|---|---|---|
| config.telemetry.disableErrorMonitoring | Disable sending error monitoring telemetry to Cartesian | false |
| config.telemetry.tracing.disabled | Disable telemetry tracing | false |
| config.telemetry.tracing.credentials.createSecret | Create a Kubernetes secret for the telemetry tracing ingestion key | false |
| config.telemetry.tracing.credentials.secretName | Name of existing secret for telemetry tracing ingestion key | N/A |
| config.telemetry.tracing.credentials.values.ingestionKey | Telemetry ingestion key (provided by Cartesian) | N/A |
To disable error monitoring, set config.telemetry.disableErrorMonitoring to true in your values file. To disable telemetry tracing, set config.telemetry.tracing.disabled to true in your values file.
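For example, both can be turned off with the following values:

config:
  telemetry:
    disableErrorMonitoring: true
    tracing:
      disabled: true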
Cache Configuration
| Parameter | Description | Default |
|---|---|---|
| config.cache.host | Cache host | N/A |
| config.cache.port | Cache port | N/A |
| config.cache.password.value | Cache password | N/A |
| config.cache.password.createSecret | Create a Kubernetes secret for cache password | false |
| config.cache.password.secretName | Name of existing secret for cache password | N/A |
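As a sketch, a cache block pointing at an in-cluster Redis and an existing password Secret might look like this (the host and Secret name are placeholders, and the data key the chart expects inside the Secret is not documented here):

config:
  cache:
    host: 'redis.default.svc.cluster.local'   # placeholder in-cluster Redis service
    port: '6379'
    password:
      secretName: redis-password              # pre-existing Secret (placeholder name)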
LLM Gateway Configuration
The Outpost supports multiple LLM gateways. Configure one of the following:
AWS Bedrock Configuration
| Parameter | Description | Default |
|---|---|---|
| config.bedrock.awsRegion | AWS Region | us-east-1 |
| config.bedrock.credentials.createSecret | Create secrets for the credentials | false |
| config.bedrock.credentials.values.awsAccessKeyId | AWS Access Key ID | N/A |
| config.bedrock.credentials.values.awsSecretAccessKey | AWS Secret Access Key | N/A |
OpenRouter Configuration
| Parameter | Description | Default |
|---|---|---|
| config.openrouter.credentials.createSecret | Create a Kubernetes secret for the API key | false |
| config.openrouter.credentials.secretName | Name of existing secret for API key | N/A |
| config.openrouter.credentials.value | OpenRouter API key (see Getting Started for how to obtain one) | N/A |
Note: When using OpenRouter, you do not need to configure AWS Bedrock credentials.
Azure AI Foundry Configuration
| Parameter | Description | Default |
|---|---|---|
| config.azureFoundry.endpoint | Azure AI Foundry endpoint URL (e.g., https://your-resource.cognitiveservices.azure.com/openai/v1/) | N/A |
| config.azureFoundry.credentials.createSecret | Create a Kubernetes secret for the API key | false |
| config.azureFoundry.credentials.secretName | Name of existing secret for API key | N/A |
| config.azureFoundry.credentials.value | Azure AI Foundry API key (optional; if not provided, Managed Identity is used) | N/A |
Authentication Methods:
- Managed Identity (default when an API key is not provided): Automatically uses the managed identity assigned to your AKS cluster. Ensure the identity has the Cognitive Services OpenAI User or Cognitive Services User role (a role-assignment sketch follows the note below).
- API Key (when credentials.value is provided): Uses the provided API key for authentication.
Note: When using Azure AI Foundry, you do not need to configure AWS Bedrock or OpenRouter credentials.
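To grant the cluster's managed identity one of the roles mentioned above, a hedged Azure CLI sketch (the principal ID and scope are placeholders for your identity and your Azure AI Foundry resource):

# Assign the Cognitive Services OpenAI User role to the AKS managed identity
az role assignment create \
  --assignee <managed-identity-principal-id> \
  --role "Cognitive Services OpenAI User" \
  --scope /subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.CognitiveServices/accounts/<your-resource>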
Networking
| Parameter | Description | Default |
|---|---|---|
| service.type | Kubernetes service type | ClusterIP |
| service.port | Service port | 3001 |
| ingress.enabled | Enable ingress | false |
| ingress.className | Ingress class name | N/A |
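For example, to expose the service through a cloud load balancer instead of the default ClusterIP (an ingress-based setup is shown in the Production Setup example below):

service:
  type: LoadBalancer
  port: 3001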
Scaling and Resources
| Parameter | Description | Default |
|---|---|---|
| autoscaling.enabled | Enable autoscaling | false |
| autoscaling.minReplicas | Minimum replicas | 1 |
| autoscaling.maxReplicas | Maximum replicas | 100 |
| autoscaling.targetCPUUtilizationPercentage | Target CPU utilization | 80 |
Example Configurations
Basic Installation
This example uses AWS Bedrock with IAM roles (suitable for pods with appropriate IAM permissions via service accounts). For other configurations, add the appropriate values as described below.
# values.yaml
config:
  outpostProjectId: 'your-project-id'
  outpostAccessKey:
    value: 'your-access-key'
  cache:
    host: 'your-redis-host'
    port: '6379'
    password:
      value: 'your-redis-password'

  # Choose one of the following LLM gateways:

  # For OpenRouter, uncomment these lines:
  # openrouter:
  #   credentials:
  #     value: 'your-openrouter-api-key'

  # For AWS Bedrock with access keys (if not using IAM roles), uncomment these:
  # bedrock:
  #   awsRegion: 'us-east-1'
  #   credentials:
  #     values:
  #       awsAccessKeyId: 'your-aws-access-key-id'
  #       awsSecretAccessKey: 'your-aws-secret-access-key'

  # For Azure AI Foundry with Managed Identity (on AKS), uncomment this:
  # azureFoundry:
  #   endpoint: 'https://your-resource.cognitiveservices.azure.com/openai/v1/'

  # For Azure AI Foundry with API key, uncomment these:
  # azureFoundry:
  #   endpoint: 'https://your-resource.cognitiveservices.azure.com/openai/v1/'
  #   credentials:
  #     value: 'your-azure-foundry-api-key'
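Then install the chart with this values file:

helm install my-outpost oci://public.ecr.aws/cartesian/outpost -f values.yaml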
Production Setup
# values.yaml
replicaCount: 2

config:
  outpostProjectId: 'your-project-id'
  outpostAccessKey:
    createSecret: true
    value: 'your-access-key'
  cache:
    host: 'redis.default.svc.cluster.local'
    port: '6379'
    password:
      createSecret: true
      value: 'your-redis-password'

  # Choose one of the following LLM gateways:

  # For OpenRouter, uncomment these lines:
  # openrouter:
  #   credentials:
  #     createSecret: true
  #     value: 'your-openrouter-api-key'

  # For AWS Bedrock with access keys (if not using IAM roles), uncomment these:
  # bedrock:
  #   awsRegion: 'us-east-1'
  #   credentials:
  #     createSecret: true
  #     values:
  #       awsAccessKeyId: 'your-aws-access-key-id'
  #       awsSecretAccessKey: 'your-aws-secret-access-key'

  # For Azure AI Foundry with Managed Identity (on AKS), uncomment this:
  # azureFoundry:
  #   endpoint: 'https://your-resource.cognitiveservices.azure.com/openai/v1/'

  # For Azure AI Foundry with API key, uncomment these:
  # azureFoundry:
  #   endpoint: 'https://your-resource.cognitiveservices.azure.com/openai/v1/'
  #   credentials:
  #     createSecret: true
  #     value: 'your-azure-foundry-api-key'

autoscaling:
  enabled: true
  minReplicas: 2
  maxReplicas: 5
  targetCPUUtilizationPercentage: 80

ingress:
  enabled: true
  className: nginx
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt
  hosts:
    - host: outpost.your-domain.com
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: outpost-tls
      hosts:
        - outpost.your-domain.com
LLM Gateway Options:
- AWS Bedrock with IAM Roles: If your Kubernetes pods have appropriate IAM permissions via service accounts (e.g., using IRSA on EKS), no additional configuration is needed; the container will use the pod's IAM role automatically (see the IRSA sketch after this list).
- AWS Bedrock with Access Keys: For clusters without IAM service account integration, uncomment and configure the bedrock section with your AWS credentials. Consider using createSecret: true to store credentials securely.
- OpenRouter: Uncomment and configure the openrouter section to use OpenRouter. Use createSecret: true to store the API key securely.
- Azure AI Foundry with Managed Identity: If your AKS cluster has a managed identity with appropriate Azure RBAC permissions, uncomment and configure only the endpoint in the azureFoundry section. The container will automatically use the managed identity.
- Azure AI Foundry with API Key: Uncomment and configure the azureFoundry section with both endpoint and credentials. Use createSecret: true to store the API key securely.
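For the IAM-role option on EKS, the usual IRSA pattern is to annotate the Outpost pods' service account with an IAM role that has Bedrock permissions. Whether this chart exposes a serviceAccount block is an assumption based on common Helm chart scaffolding, and the role ARN is a placeholder:

serviceAccount:
  create: true
  annotations:
    # Placeholder role ARN; the role needs permission to invoke Bedrock models
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/outpost-bedrock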
Health Monitoring
The chart includes pre-configured liveness and readiness probes that check the /health endpoint. The default configuration is:
livenessProbe:
  httpGet:
    path: /health
    port: http
  initialDelaySeconds: 5
  periodSeconds: 5
readinessProbe:
  httpGet:
    path: /health
    port: http
  initialDelaySeconds: 5
  periodSeconds: 5
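If your pods need more time to start, these defaults can be overridden in your values file. This assumes the chart exposes the probes under the same livenessProbe and readinessProbe keys shown above; the timings are illustrative:

livenessProbe:
  httpGet:
    path: /health
    port: http
  initialDelaySeconds: 15   # illustrative: allow extra startup time
  periodSeconds: 10

The readinessProbe can be adjusted the same way.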
Upgrading
To upgrade an existing installation:
helm upgrade my-outpost oci://public.ecr.aws/cartesian/outpost -f values.yaml
Uninstalling
To uninstall/delete the deployment:
helm uninstall my-outpost
Notes
- For production deployments:
  - Enable autoscaling for high availability
  - Configure appropriate resource requests and limits
  - Use secrets for sensitive information
  - Enable and configure ingress with TLS
  - Set up proper monitoring and alerting
- Security considerations:
  - Store sensitive values in Kubernetes secrets
  - Use TLS for ingress
  - Configure network policies to restrict access (see the sketch below)
  - Apply regular updates and security patches
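As a sketch of the network-policy recommendation above, the policy below only admits traffic from an ingress controller namespace to the Outpost pods. The pod labels, namespace label, and container port are assumptions; adjust them to match your release.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: outpost-allow-ingress
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/instance: my-outpost            # assumed release label
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: ingress-nginx   # assumed ingress controller namespace
      ports:
        - protocol: TCP
          port: 3001                                    # assumed container port (matches service.port)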