
Getting Started

The Cartesian Outpost is a secure, self-hosted component that lets you keep customer data within your own infrastructure while leveraging Cartesian's AI capabilities. It acts as a bridge between your application and Cartesian's cloud services: sensitive data remains under your control, while you can still deliver personalized AI experiences to your users.

Prerequisites

Cache

The Cartesian Outpost requires a Redis or Valkey cache instance. The cache stores user session state and is required for correct operation of the Outpost. Its contents are ephemeral: losing the cache does not affect production data.

If you have an existing cache cluster, you can use it with the Cartesian Outpost, or you can install a new cache instance. You can also use a managed Redis-compatible service such as Amazon ElastiCache or Azure Cache for Redis.

Installing a cache instance is out of scope for this document; refer to the official Redis or Valkey documentation for installation guidance.
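For local evaluation (not production), a cache instance can be run as a container. The image tag, port, and memory settings below are illustrative choices, not requirements of the Outpost:

```yaml
# docker-compose.yml — minimal local Valkey cache for evaluating the Outpost.
# Pin the image version and enable authentication before any production use.
services:
  cache:
    image: valkey/valkey:8
    ports:
      - "6379:6379"
    command: ["valkey-server", "--maxmemory", "256mb", "--maxmemory-policy", "allkeys-lru"]
```

Because the cache holds only ephemeral session state, an eviction policy such as `allkeys-lru` is usually acceptable here.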

LLM Gateways

The Cartesian Outpost requires an LLM gateway to power AI interactions.

Note: We support a growing list of providers. Contact us if you need support for an additional one.

AWS Bedrock

AWS Bedrock allows Cartesian to process customer data within your AWS environment, eliminating the need to transfer customer data outside your infrastructure. This is the recommended option when running on AWS infrastructure.

Bedrock supports authentication via IAM roles (recommended for AWS infrastructure) or access keys (for non-AWS environments). See the deployment method documentation for detailed configuration instructions.
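To illustrate the two authentication modes, the variables below are the standard AWS SDK credential environment variables; whether the Outpost reads exactly these names is an assumption, so consult the deployment method documentation for the authoritative configuration:

```shell
# On AWS infrastructure (recommended): attach an IAM role to the
# instance, ECS task, or pod. The AWS SDK resolves credentials
# automatically and no variables need to be set.

# Outside AWS: static access keys via the standard AWS SDK variables.
export AWS_ACCESS_KEY_ID="AKIA..."     # placeholder value
export AWS_SECRET_ACCESS_KEY="..."     # placeholder value
export AWS_REGION="us-east-1"          # a region where Bedrock models are enabled
```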

OpenRouter

OpenRouter provides a unified API to access multiple LLM providers. This is a good option when you need flexibility in model selection or are not running on AWS infrastructure.

To obtain an OpenRouter API key:

  1. Sign up for an account at OpenRouter
  2. Navigate to the API Keys section in your dashboard
  3. Generate a new API key
  4. Use this key when configuring the Outpost (see deployment method documentation for details)
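Once you have a key, a quick way to verify it is a direct call to OpenRouter's OpenAI-compatible chat completions endpoint. The sketch below uses only the Python standard library; the model name is an illustrative choice, and in normal operation the Outpost performs these calls for you:

```python
import json
import os
import urllib.request

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_request(api_key: str, model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenRouter chat-completion request (OpenAI-compatible schema)."""
    payload = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        OPENROUTER_URL,
        data=payload,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

if __name__ == "__main__":
    key = os.environ.get("OPENROUTER_API_KEY", "sk-placeholder")
    req = build_request(key, "openai/gpt-4o", "Say hello")
    # Uncomment to actually send the request (requires a valid key):
    # with urllib.request.urlopen(req) as resp:
    #     print(json.load(resp)["choices"][0]["message"]["content"])
    print("Request prepared for", req.get_full_url())
```

A non-2xx response (e.g. 401) when sending indicates the key is invalid or misconfigured.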

Azure AI Foundry

Azure AI Foundry allows Cartesian to process customer data within your Azure environment. This is the recommended option when running on Azure infrastructure.

Azure AI Foundry supports two authentication methods:

  • Managed Identity (recommended for Azure infrastructure) - Automatic authentication with no secrets to manage
  • API Key (for simple deployments or non-Azure environments)

To set up Azure AI Foundry:

  1. Create an Azure AI Foundry resource in your Azure subscription
  2. Deploy a model (e.g., GPT-4) in your Azure AI Foundry resource
  3. Note the endpoint URL (e.g., https://your-resource.cognitiveservices.azure.com/openai/v1/)
  4. Choose your authentication method:
    • Managed Identity: Assign the Cognitive Services OpenAI User or Cognitive Services User role to your AKS cluster's managed identity, Azure Container Instance identity, or VM identity
    • API Key: Copy the API key from the Azure portal
  5. Use these credentials when configuring the Outpost (see deployment method documentation for details)
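For the managed-identity path, the role assignment in step 4 can be made with the Azure CLI. The principal and scope values below are placeholders for your cluster or VM identity and your Azure AI Foundry resource:

```shell
# Grant the cluster/VM identity access to the Azure AI Foundry resource.
# <principal-id> and <resource-id> are placeholders — look them up with
# `az identity show` and `az cognitiveservices account show`.
az role assignment create \
  --assignee "<principal-id>" \
  --role "Cognitive Services OpenAI User" \
  --scope "<resource-id>"
```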

Integration Options

The Cartesian Outpost can be deployed in your infrastructure using one of two methods:

1. Docker Container Hosting

Docker container hosting is suitable when you:

  • Want a simple, straightforward deployment
  • Are running on a single server or a small cluster
  • Don't need advanced orchestration features
  • Are using container hosting platforms like AWS ECS, Azure Container Instances, or similar services

This method gives you direct control over the container configuration and is well suited to straightforward deployments.
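As a sketch of what a single-container deployment might look like — the image name and environment variable names here are hypothetical placeholders, not the actual published values; see the Docker deployment documentation for the real ones:

```shell
# Hypothetical example — image name and variable names are placeholders.
docker run -d \
  --name cartesian-outpost \
  -p 8080:8080 \
  -e CACHE_URL="redis://cache.internal:6379" \
  -e LLM_GATEWAY="openrouter" \
  cartesian/outpost:latest
```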

2. Helm Chart

The Helm chart deployment is recommended when you:

  • Are running a Kubernetes cluster
  • Need advanced scaling and orchestration capabilities
  • Want automated updates and rollbacks
  • Require integration with Kubernetes-native monitoring and logging
  • Need to manage multiple Outpost instances efficiently

The Helm chart provides a production-ready configuration with best practices for running the Outpost in a Kubernetes environment.
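A Helm deployment typically follows the usual add-repo/install flow. The repository URL, chart name, and values file below are hypothetical placeholders; use the ones from the Helm deployment documentation:

```shell
# Hypothetical example — repository URL and chart name are placeholders.
helm repo add cartesian https://charts.cartesian.example
helm install outpost cartesian/outpost \
  --namespace cartesian \
  --create-namespace \
  -f values.yaml
```

Keeping your overrides in a `values.yaml` file makes upgrades and rollbacks reproducible with `helm upgrade` and `helm rollback`.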

Choose the integration option that best fits your infrastructure and operational requirements. The following sections provide detailed instructions for each deployment method.