Docker Container

This guide explains how to deploy the Cartesian Outpost using Docker containers. This method is ideal for simpler deployments.

Docker Image

The Cartesian Outpost Docker image is available on Amazon's Elastic Container Registry (ECR):

public.ecr.aws/cartesian/outpost-backend

You can browse all available versions in the Amazon ECR Public Gallery.

Prerequisites

Reverse Proxy (Required)

The Outpost must be deployed behind a reverse proxy (such as Nginx or Apache) that handles:

  • TLS termination (HTTPS)
  • Request routing
  • Load balancing when running multiple instances (optional)

Example Nginx configuration:

server {
    listen 443 ssl;
    server_name outpost.your-domain.com;

    # SSL configuration
    ssl_certificate /path/to/your/certificate.crt;
    ssl_certificate_key /path/to/your/private.key;

    # Modern SSL configuration
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;
    ssl_prefer_server_ciphers off;

    location / {
        proxy_pass http://localhost:3001;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

Scaling Requirements

For production deployments, consider the following scaling requirements:

  1. Multiple Instances: Deploy multiple Outpost instances to handle increased load and provide high availability

    • Requires Redis/Valkey cache configuration
    • Use a load balancer to distribute traffic
    • Recommended minimum of 2 instances for high availability
  2. Load Balancing: When running multiple instances, ensure your reverse proxy or load balancer:

    • Distributes traffic evenly across instances
    • Handles health checks appropriately
    • Removes unhealthy instances from rotation
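When running multiple instances outside of an orchestrator, Nginx can distribute the traffic itself via an upstream block. The sketch below is illustrative rather than part of the Outpost documentation: it assumes two instances reachable at outpost-1:3001 and outpost-2:3001 (substitute your own hostnames and ports) and uses Nginx's passive health checks to drop failing instances from rotation.

upstream outpost_backend {
    least_conn;                                           # route each request to the least-busy instance
    server outpost-1:3001 max_fails=3 fail_timeout=30s;   # dropped from rotation after repeated failures
    server outpost-2:3001 max_fails=3 fail_timeout=30s;
}

server {
    listen 443 ssl;
    server_name outpost.your-domain.com;

    # ... same SSL configuration as in the example above ...

    location / {
        proxy_pass http://outpost_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}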

Container Configuration

Environment Variables

The Outpost is configured through the following environment variables:

Required Configuration

Both values below are provided by Cartesian.

  • OUTPOST_PROJECT_ID: Required for authenticating the Outpost against the Cartesian Cloud
  • OUTPOST_ACCESS_KEY: Your Outpost access key

Cache Configuration

For production installations with more than one Outpost instance, you must configure a Redis or Valkey cache:

  • OUTPOST_CACHE_TYPE: Set to "redis" for both Redis and Valkey
  • OUTPOST_CACHE_HOST: The hostname of your cache instance
  • OUTPOST_CACHE_PORT: The port number of your cache instance
  • OUTPOST_CACHE_PASSWORD: The password for your cache instance (if required)
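For example, the cache settings can be passed to a single container alongside the required variables; the host, port, and password below are placeholders:

docker run -d \
  -p 3001:3001 \
  -e OUTPOST_PROJECT_ID="your-project-id" \
  -e OUTPOST_ACCESS_KEY="your-access-key" \
  -e OUTPOST_CACHE_TYPE="redis" \
  -e OUTPOST_CACHE_HOST="your-redis-host" \
  -e OUTPOST_CACHE_PORT="6379" \
  -e OUTPOST_CACHE_PASSWORD="your-redis-password" \
  public.ecr.aws/cartesian/outpost-backend:latest

The Production Docker Compose example later in this guide shows the same settings applied to replicated instances.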

AWS Bedrock Configuration

The Outpost can authenticate with AWS Bedrock in two ways:

  1. Using IAM Roles (Recommended for AWS Infrastructure)

    • Automatically handles authentication when running on AWS services (EC2, EKS, ECS)
    • No additional configuration required
  2. Using Access Keys

    • Required when running outside AWS infrastructure
    • Configure using these environment variables:
      • BEDROCK_AWS_REGION: defaults to "us-east-1"
      • BEDROCK_AWS_ACCESS_KEY_ID
      • BEDROCK_AWS_SECRET_ACCESS_KEY
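For example, access-key authentication can be configured as follows (all key values below are placeholders):

docker run -d \
  -p 3001:3001 \
  -e OUTPOST_PROJECT_ID="your-project-id" \
  -e OUTPOST_ACCESS_KEY="your-access-key" \
  -e BEDROCK_AWS_REGION="us-east-1" \
  -e BEDROCK_AWS_ACCESS_KEY_ID="your-aws-access-key-id" \
  -e BEDROCK_AWS_SECRET_ACCESS_KEY="your-aws-secret-access-key" \
  public.ecr.aws/cartesian/outpost-backend:latest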

Note: Required IAM Permissions

For either authentication method, the IAM role or user will need the following policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ModelInvocation",
      "Effect": "Allow",
      "Action": ["bedrock:InvokeModel", "bedrock:InvokeModelWithResponseStream"],
      "Resource": "arn:aws:bedrock:*::foundation-model/*"
    }
  ]
}

This policy allows invoking any Bedrock foundation model and using streaming responses.
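If you manage IAM with the AWS CLI, one way to attach the policy is as an inline role policy. This is a sketch, not a required step: the role name and file name below are placeholders, and it assumes the policy JSON above has been saved to bedrock-invoke.json.

aws iam put-role-policy \
  --role-name your-outpost-role \
  --policy-name OutpostBedrockInvoke \
  --policy-document file://bedrock-invoke.json

For the access-key method, attach the same document to the IAM user instead (aws iam put-user-policy with --user-name).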

Optional Configuration

  • OUTPOST_SERVICE_URL: The Cartesian Cloud URL
  • OUTPOST_ENABLE_ERROR_MONITORING: Enable error monitoring info to be sent to Cartesian (default: false)
  • OUTPOST_ENABLE_TELEMETRY: Enable telemetry metrics to be sent to Cartesian (default: false)
  • CARTESIAN_LOG_FORMAT: Set to "container" for a container-friendly logging format
  • PORT: The port on which the Outpost service will listen (default: 3001)
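As an illustration (not a recommended configuration), the listening port and log format can be overridden like this; note that the published port must match the PORT value:

docker run -d \
  -p 8080:8080 \
  -e PORT="8080" \
  -e CARTESIAN_LOG_FORMAT="container" \
  -e OUTPOST_PROJECT_ID="your-project-id" \
  -e OUTPOST_ACCESS_KEY="your-access-key" \
  public.ecr.aws/cartesian/outpost-backend:latest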

Deployment Examples

Basic Docker Run

docker run -d \
  -p 3001:3001 \
  -e OUTPOST_PROJECT_ID="your-project-id" \
  -e OUTPOST_ACCESS_KEY="your-access-key" \
  public.ecr.aws/cartesian/outpost-backend:latest

Production Docker Compose with Load Balancing

version: '3.8'
services:
  nginx:
    image: nginx:latest
    ports:
      - '443:443'
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
      - ./ssl:/etc/nginx/ssl:ro
    depends_on:
      - outpost
    restart: unless-stopped

  outpost:
    image: public.ecr.aws/cartesian/outpost-backend:latest
    deploy:
      replicas: 2 # Deploy multiple instances
    expose:
      - '3001'
    environment:
      OUTPOST_PROJECT_ID: your-project-id
      OUTPOST_ACCESS_KEY: your-access-key
      OUTPOST_CACHE_TYPE: redis
      OUTPOST_CACHE_HOST: redis
      OUTPOST_CACHE_PORT: 6379
      OUTPOST_CACHE_PASSWORD: your-redis-password
      CARTESIAN_LOG_FORMAT: container
    restart: unless-stopped
    depends_on:
      - redis

  redis:
    image: redis:7
    command: redis-server --requirepass your-redis-password
    ports:
      - '6379:6379'
    volumes:
      - redis-data:/data
    restart: unless-stopped

volumes:
  redis-data:
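The compose file mounts ./nginx.conf as the proxy configuration but does not show its contents. A minimal sketch is below; it assumes the certificate and key sit in ./ssl, and relies on Docker's embedded DNS to resolve the outpost service name to both replicas (Nginx resolves the name at startup, so restart the nginx container if the replicas are recreated with new addresses):

server {
    listen 443 ssl;
    server_name outpost.your-domain.com;

    ssl_certificate /etc/nginx/ssl/certificate.crt;
    ssl_certificate_key /etc/nginx/ssl/private.key;
    ssl_protocols TLSv1.2 TLSv1.3;

    location / {
        # "outpost" resolves to the IPs of all replicas; Nginx round-robins
        # between the addresses it received at startup.
        proxy_pass http://outpost:3001;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}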

Health Checks

The Outpost container exposes an HTTP endpoint for health monitoring:

  • Liveness probe: GET /health
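For example, a running instance can be checked from the host (assuming port 3001 is published):

curl -f http://localhost:3001/health

The same endpoint can back a Docker Compose healthcheck or a load balancer health check. A sketch for the outpost service in the compose file above, assuming curl is available inside the image (adjust the command if it is not):

    healthcheck:
      test: ['CMD', 'curl', '-f', 'http://localhost:3001/health']
      interval: 30s
      timeout: 5s
      retries: 3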

Security Considerations

  1. TLS Configuration (Required)

    • Always terminate TLS at your reverse proxy
    • Use modern TLS versions (1.2 and 1.3)
    • Regularly update SSL certificates
    • Follow security best practices for cipher configuration
  2. Access Control

    • Always use secure passwords for Redis/Valkey cache instances
    • Store sensitive environment variables (like OUTPOST_ACCESS_KEY) using your platform's secrets management system
    • Follow the principle of least privilege when setting up service accounts and permissions
  3. Network Security

    • Place the Outpost behind a reverse proxy
    • Configure appropriate firewall rules
    • Restrict direct access to the Outpost containers
    • Limit Redis access to only the Outpost instances
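One way to apply the last point to the Production Docker Compose example above is to stop publishing the Redis port on the host; the Outpost containers still reach Redis over the compose network by its service name. A sketch of the adjusted redis service:

  redis:
    image: redis:7
    command: redis-server --requirepass your-redis-password
    # No "ports:" entry: Redis is reachable only on the internal compose
    # network (the outpost service connects to the hostname "redis"),
    # not from the host or the outside world.
    volumes:
      - redis-data:/data
    restart: unless-stopped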