Kubernetes Installation Guide

Prerequisites

  • A Kubernetes cluster (v1.24 or later) with access to your MySQL instance (v8.0+)
  • Minimum cluster resources: 4 vCPUs, 16GB RAM
  • Database credentials with read/write and delete access on the tables
  • Helm (version 3.17.0 or later)
  • Domain names for API, Console and IdP endpoints

Installation Steps

To access Helm charts and container images from the Authlete registry, follow these steps:

Setup Phase

1. Create an Organization

  • Log in to the Authlete Console.
  • Create an organization for your company.
  • Note down the Organization ID.

2. Request Access

  • Share the Organization ID and Organization Name with Authlete Support.
  • Authlete will authorize registry access for your organization.

3. Generate Organization Token

  • In the Authlete Console, generate a Token for your organization.
  • Keep the Organization ID and Token handy for authentication.

Preparation Phase

1. Create a Kubernetes Namespace

kubectl create ns authlete

2. Login to the Helm Registry

Use the following command to log in:

helm registry login -u <ORG_ID> -p <TOKEN> artifacts.authlete.com

Replace <ORG_ID> and <TOKEN> with your actual values.
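
The -p flag leaves the token in your shell history. If that is a concern, helm also accepts the password on standard input; a sketch assuming the values are exported as ORG_ID and TOKEN:

# Read the token from stdin instead of passing it on the command line
echo "$TOKEN" | helm registry login -u "$ORG_ID" --password-stdin artifacts.authlete.com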

Once logged in, you can pull Helm charts from the registry.

3. Pull Helm Chart

The Authlete Helm chart is distributed in OCI format. Use the following commands to pull and extract the chart locally:

# Current stable version is 2.0.1
helm pull oci://artifacts.authlete.com/authlete-platform-chart --version 2.0.1 --untar
cd authlete-platform-chart
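
If you want to inspect the chart's default configuration before (or without) extracting it, helm can read the bundled values straight from the OCI reference (requires a Helm version with OCI support, which 3.17+ has):

# Print the chart's bundled values.yaml without extracting the archive
helm show values oci://artifacts.authlete.com/authlete-platform-chart --version 2.0.1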

4. Image Registry Setup

You have two options for accessing Authlete container images:

Option A: Direct Registry Access (Development/Evaluation)

For development or evaluation environments, you can pull images directly from Authlete’s registry:

# Create a secret for registry authentication
kubectl create secret docker-registry authlete-registry \
-n authlete \
--docker-server=artifacts.authlete.com \
--docker-username=<ORG_ID> \
--docker-password=<TOKEN>

# Configure the default ServiceAccount to use this secret
kubectl patch serviceaccount default \
-n authlete \
-p '{"imagePullSecrets": [{"name": "authlete-registry"}]}'

Option B: Mirror Images (Recommended for Production)

For production environments, see the Mirror Images section below for instructions on mirroring images to your private registry.

5. Mirror Images

For improved reliability and control, we recommend customers mirror Authlete-provided container images to their own container registry. This avoids direct runtime dependency on Authlete’s registry, and ensures reproducible deployments.

| Image | Description | Supported Version Tags |
|---|---|---|
| server | Core API server that handles OAuth 2.0 and OpenID Connect operations | 3.0.19 |
| server-db-schema | Database schema initialization tool for the API server | v3.0.19 |
| idp | Identity Provider server for user authentication and management | 1.0.18 |
| idp-db-schema | Database schema initialization tool for the IdP server | v1.0.18 |
| console | React-based management console for platform configuration and monitoring | 1.0.11 |
| nginx | Nginx-based reverse proxy for handling TLS termination and routing | 1.26.3 |
| valkey | Caching service for improved performance and reduced database load | 8.0.1 |
| gce-proxy | Cloud SQL proxy for secure database connections in GCP environments | 1.37.0 |
| authlete-bootstrapper | Initialization service for the platform. Only used during first deployment. | 1.1.0 |
Mirror using docker
# Authenticate to Authlete registry
docker login artifacts.authlete.com -u <ORG_ID> -p <TOKEN>

# Pull an image
docker pull artifacts.authlete.com/<image>:<tag>

# Tag and push to your own registry
docker tag artifacts.authlete.com/<image>:<tag> registry.mycompany.com/<image>:<tag>
docker push registry.mycompany.com/<image>:<tag>

Update your values.yaml to use the mirrored image paths before running the installation.
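
The per-image pull/tag/push steps above can be scripted. A minimal sketch using the image list from the table (registry.mycompany.com is a placeholder for your own registry):

#!/usr/bin/env bash
set -euo pipefail

SOURCE_REGISTRY="artifacts.authlete.com"
TARGET_REGISTRY="registry.mycompany.com"

# Image list taken from the table above
IMAGES=(
  "server:3.0.19"
  "server-db-schema:v3.0.19"
  "idp:1.0.18"
  "idp-db-schema:v1.0.18"
  "console:1.0.11"
  "nginx:1.26.3"
  "valkey:8.0.1"
  "gce-proxy:1.37.0"
  "authlete-bootstrapper:1.1.0"
)

# Pull each image from Authlete, retag it, and push it to the target registry
for image in "${IMAGES[@]}"; do
  docker pull "${SOURCE_REGISTRY}/${image}"
  docker tag  "${SOURCE_REGISTRY}/${image}" "${TARGET_REGISTRY}/${image}"
  docker push "${TARGET_REGISTRY}/${image}"
done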

Alternatively, you can use crane to copy the images directly to your own registry without pulling them locally.

Mirror using crane
# Set your target registry base
TARGET_REGISTRY="ghcr.io/your-org-name"

# Image copy commands
crane cp artifacts.authlete.com/server:3.0.19 $TARGET_REGISTRY/server:3.0.19
crane cp artifacts.authlete.com/server-db-schema:v3.0.19 $TARGET_REGISTRY/server-db-schema:v3.0.19
crane cp artifacts.authlete.com/idp:1.0.18 $TARGET_REGISTRY/idp:1.0.18
crane cp artifacts.authlete.com/idp-db-schema:v1.0.18 $TARGET_REGISTRY/idp-db-schema:v1.0.18
crane cp artifacts.authlete.com/console:1.0.11 $TARGET_REGISTRY/console:1.0.11
crane cp artifacts.authlete.com/nginx:1.26.3 $TARGET_REGISTRY/nginx:1.26.3
crane cp artifacts.authlete.com/valkey:8.0.1 $TARGET_REGISTRY/valkey:8.0.1
crane cp artifacts.authlete.com/gce-proxy:1.37.0 $TARGET_REGISTRY/gce-proxy:1.37.0
crane cp artifacts.authlete.com/authlete-bootstrapper:1.1.0 $TARGET_REGISTRY/authlete-bootstrapper:1.1.0

Configuration Phase

1. Configure Values and Secrets

  • The default values.yaml is already bundled inside the chart. You can inspect or modify it for custom configurations.

  • Update the global.repo to your own registry.

global:
  id: "authlete-platform"
  repo: "registry.your-company.com"  # Required: Your container registry

  • Update the domains section with your domain names:

domains:
  # Required: These domains must be accessible from your users
  api: "api.your-domain.com"          # API server
  idp: "login.your-domain.com"        # IdP server
  console: "console.your-domain.com"  # Management console

2. Configure TLS Certificates

  • Get a TLS certificate for your domain (e.g. from your certificate authority or Let’s Encrypt), and create a Kubernetes secret of type kubernetes.io/tls in the authlete namespace before installation. The certificate should cover all domains used by the platform (e.g. api.example.com, login.example.com, console.example.com). You can use a wildcard or a SAN certificate.
kubectl create secret tls proxy-certs \
  --cert=./tls.crt \
  --key=./tls.key \
  -n authlete
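
To confirm the certificate actually covers all three domains, you can inspect its subject and SAN entries with openssl (the -ext option requires OpenSSL 1.1.1+):

# List the subject and SAN entries of the certificate
openssl x509 -in ./tls.crt -noout -subject -ext subjectAltName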

3. Enable TLS Connections to the Database and Memory Cache (Optional)

Application Configuration

TLS encryption can be enabled with simple configuration changes.

Redis: If the Redis connection requires TLS encryption, use the rediss:// scheme in the MEMCACHE_HOST value:

api:
  env:
    - name: MEMCACHE_ENABLE
      value: "true"
    - name: MEMCACHE_HOST
      value: "rediss://<username>:<password>@host:port"  # Can be hostname or IP address
      ...

For IdP deployment:

idp:
  env:
  - name: JAVA_OPTS
    value: "-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8008"
  - name: AUTHLETE_IDP_REMOTE_CACHE_ENDPOINT
    value: "rediss://<username>:<password>@host:port"  # Can be hostname or IP address
  - name: CLUSTER_NAME
    ...

Database: TLS connection to the database can be enabled with sslMode property:

database:
  idp:  # IdP Server Database
    name: idp
    user: authlete
    password: !raw *****
    host: localhost
    connectionParams:
      allowPublicKeyRetrieval: true
      sslMode: verify-ca # Configure the sslMode here.
  api: # API Server Database
    name: server
    user: authlete
    password: !raw ******
    host: localhost
    connectionParams:
      allowPublicKeyRetrieval: true
      sslMode: verify-ca # Configure the sslMode here.

Note: When using Cloud SQL Proxy, sslMode should be set to disable because the connection is already encrypted by the proxy. Enabling an additional layer of TLS will not work.

Note: If the certificate presented by Redis or Database over TLS is issued by a private CA, you should create a custom bundle containing all certificates and load the bundle into the applications. Please refer to the Private CA section for more details.

4. Private CA (Optional)

To use TLS with a private CA certificate, a custom bundle containing all necessary certificates must be prepared outside of the Helm chart. You then create a ConfigMap containing the bundles, which must satisfy the following contract:

  • The ConfigMap must be named custom-ca-bundle
  • It must contain at least two keys: ca-bundle.p12 and ca-bundle.crt

Creating CA Bundle

We recommend extracting the public CA bundle as described in the trust-manager documentation. Below, we explain how to create a ConfigMap with the bundle.

Creating bundle with trust-manager (Optional)

Trust-manager makes it easy to manage trust bundles in Kubernetes. For installation instructions, please consult trust-manager's official documentation. Once trust-manager is installed, you will need to create a single Bundle object. The ConfigMap containing all necessary certificates will then be generated automatically.

Here is an example Bundle from which trust-manager will assemble the final bundle. This Bundle’s manifest file name is assumed to be bundle-creation.yaml

apiVersion: trust.cert-manager.io/v1alpha1
kind: Bundle
metadata:
  name: custom-ca-bundle
spec:
  sources:
  - useDefaultCAs: true

  # A manually specified PEM-encoded cert, included directly into the Bundle
  - inLine: |
      -----BEGIN CERTIFICATE-----
      MIIC5zCCAc+gAwIBAgIBADANBgkqhkiG9w0BAQsFADAVMRMwEQYDVQQDEwprdWJl
      ...
      ... contents of proxy's CA certificate ...
      -----END CERTIFICATE-----      
  - inLine: |
      -----BEGIN CERTIFICATE-----
      ... contents of mysql's CA certificate ...
      -----END CERTIFICATE-----      
  - inLine: |
      -----BEGIN CERTIFICATE-----
      ... contents of valkey's CA certificate ...
      -----END CERTIFICATE-----      
  target:
    # All ConfigMaps will include a PEM-formatted bundle, here named "root-certs.pem"
    # and in this case we also request binary formatted bundles in PKCS#12 format,
    # here named "bundle.p12".
    configMap:
      key: "ca-bundle.crt"
    additionalFormats:
      pkcs12:
        key: "ca-bundle.p12"
    namespaceSelector:
      matchLabels:
        kubernetes.io/metadata.name: "authlete" # Deployment namespace

Finally, create the Bundle with

kubectl create -f bundle-creation.yaml -n authlete
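
Once trust-manager reconciles the Bundle, a ConfigMap named custom-ca-bundle should appear in the target namespace. A quick way to check that both required keys are present:

# ca-bundle.crt appears under data; the binary ca-bundle.p12 under binaryData
kubectl get configmap custom-ca-bundle -n authlete -o yaml | grep -E 'ca-bundle\.(crt|p12):'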

Note: If you're using an official cert-manager-provided Debian trust package, check regularly to ensure the trust package is kept up to date.

Manual creation of Bundle

The custom bundle can be created in a few simple steps using docker commands.

Note: The tutorial below demonstrates sample code only. Customers are responsible for creating, storing, and updating bundles securely.

  1. Create a file named gen-bundles.sh:
# Input/output directories
SRC="/certs"                                         # Location to place crt files
DST="/usr/local/share/ca-certificates/custom"        # Directory monitored by update-ca-certificates
OUT="/bundle"                                        # Output directory for the bundle

# Clean up output directory
rm -f $OUT/*

# Create output directories
mkdir -p "$DST"

# Install required packages (ca-certificates-java is necessary for generating the Java truststore)
apt-get -y update
apt-cache madison ca-certificates
DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends openssl default-jre-headless ca-certificates-java "ca-certificates=20230311+deb12u1"

# 1) Place additional certificates (*.pem / *.crt placed as .crt)
cp -v ${SRC}/* ${DST}/
# If you want to unify the extension so that .pem files are also treated as .crt, enable the following
# for f in "$DST"/*.pem; do mv -v "$f" "${f%.pem}.crt"; done

# 2) Update system & Java truststore
update-ca-certificates -f

# 3) Output (PEM and PKCS12)
cp -v /etc/ssl/certs/ca-certificates.crt "$OUT/ca-bundle.crt"
keytool -importkeystore -srckeystore /etc/ssl/certs/java/cacerts -destkeystore $OUT/ca-bundle.p12 -deststoretype PKCS12 -srcstorepass changeit -deststorepass changeit
  2. Prepare the certificates to trust. Place the additional trusted certificates (one or more files) under the certs directory with a .crt extension:
   ./certs/
     ├── sql-ca.crt
     ├── redis-ca.crt
     └── ...
  3. Build the bundle:
docker run --rm --user root -v "$PWD/certs:/certs:ro" -v "$PWD/out:/bundle" -v "$PWD/gen-bundles.sh:/usr/local/bin/gen-bundles.sh:ro" --entrypoint bash docker.io/library/debian:12-slim -c "sh /usr/local/bin/gen-bundles.sh"
  4. After preparing the ca-bundle.p12 and ca-bundle.crt files (written to ./out by the step above), create the Kubernetes ConfigMap with:
kubectl create cm custom-ca-bundle -n authlete --from-file=out/ca-bundle.p12 --from-file=out/ca-bundle.crt

Note: The current Debian base image used for the bundle creation is docker.io/library/debian:12-slim, and the ca-certificates target version is 20230311+deb12u1.

Loading the custom bundle from the applications

Now that the bundle is created and ready for use, one more property must be configured to make it available to the applications. Set .Values.global.tls.caBundle.enabled to "true" so that the bundle is mounted into the application pods.

global:
  tls:
    caBundle:
      enabled: true # Enable this
      type: ConfigMap
      name: custom-ca-bundle
      trustStore:
        password: "changeit"

Updating the Bundle

If you create a new bundle and replace the old one, restart the IdP, Authlete server, and proxy pods (applications). On restart, the applications will load the updated bundle.
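
A rolling restart picks up the new bundle without downtime; a sketch assuming the deployments are named api, idp, and proxy as in the pod listings later in this guide (adjust to the names in your cluster):

# Restart the workloads that consume the CA bundle
kubectl rollout restart deployment/api deployment/idp deployment/proxy -n authlete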

5. Configure Database Connection

The platform requires two databases: one for the API server and one for the IdP server. Configure the connection details in secret-values.yaml:

Note: For MySQL 8.0+, ensure your databases are configured with:

  • Character set: utf8mb4
  • Collation: utf8mb4_0900_ai_ci

A template secret-values.yaml file is included in the chart archive. Modify secret-values.yaml with your database and Authlete admin credentials:
database:
  idp:  # IdP Server Database
    name: idp           # Database name
    user: authlete      # Database user
    password: !raw ***** # User password
    host: localhost     # Database host
    connectionParams:
      allowPublicKeyRetrieval: true
      sslMode: disable # disable|trust|verify-ca|verify-full
  api: # API Server Database
    name: server
    user: authlete
    password: !raw ******
    host: localhost
    connectionParams:
      allowPublicKeyRetrieval: true
      sslMode: disable # disable|trust|verify-ca|verify-full

idp:
  auth:
    adminUser:
      email: "admin@authlete.com"
      password: !raw ******
  encryptionSecret: ********
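
If the databases do not exist yet, they can be created with the required character set and collation up front. A sketch using the mysql client, with <DB_HOST> as a placeholder and database names matching the sample above:

# Create both databases with the charset/collation noted above
mysql -h <DB_HOST> -u root -p <<'SQL'
CREATE DATABASE IF NOT EXISTS idp    CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci;
CREATE DATABASE IF NOT EXISTS server CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci;
SQL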

For GCP Cloud SQL:

cloudSql:
  enabled: true
  image: gce-proxy:1.37.0
  instance: project:region:instance  # Your Cloud SQL instance
  port: 3306

For other cloud providers, disable Cloud SQL proxy and use direct connection:

cloudSql:
  enabled: false

6. Configure Caching

The platform supports two caching configuration options:

Option 1: Default Valkey Cache (Built-in)

By default, the platform deploys a Valkey pod as the caching solution. This configuration is ready to use out of the box and is defined in values.yaml:

redis:
  name: redis
  enabled: true
  image: valkey:8.0.1 
  # -- Resources requirements
  resources:
    requests:
      cpu: 10m
      memory: 440M
    limits:
      cpu: 1
      memory: 440M

Option 2: Managed Cache Services

For production environments, you might prefer using managed cache services. The platform supports various managed services, including:

  • Google Cloud Platform: Memorystore for Valkey
  • Amazon Web Services: ElastiCache for Redis
  • Azure: Azure Cache for Redis

Note for Google Cloud Users: When using Memorystore for Valkey, we recommend setting up connectivity using Private Service Connect (PSC) with service connection policies. This is the only supported networking method for Memorystore for Valkey. For detailed setup instructions, refer to the Memorystore for Valkey networking documentation.

To use a managed cache service:

  1. Disable the default Valkey service in values.yaml:

redis:
  enabled: false

  2. Configure the managed cache connection in values.yaml under the API section:

api:
  env:
    - name: MEMCACHE_ENABLE
      value: "true"
    - name: MEMCACHE_HOST
      value: "redis://<username>:<password>@host:port"  # Can be hostname or IP address

Note: If your managed cache service is configured in clustered mode, add the following environment variable:

    - name: MEMCACHE_BACKEND
      value: "redis-cluster"

7. Other configuration options

Proxy configuration

If your domain names are very long, you might exceed nginx's map_hash_bucket_size or server_names_hash_bucket_size limits. You can work around this by increasing the values of mapHashBucketSize and serverNamesHashBucketSize:

  configs:
    mapHashBucketSize: 128 # Change this
    serverNamesHashBucketSize: 128 # Change this

DB Schema update hooks

We introduced DB schema update hooks to streamline image version updates. With these hooks, new Liquibase changesets are applied automatically so that the database schema stays on the same version as the application. The hooks are idempotent and will not change the existing database schema as long as the schema has not changed between the old version and the upgraded one.

These hooks are required to create the database schema during installation and when updating the application version. However, if you are not deploying a new application version or expecting any schema changes, they can be disabled:

hooks:
  serviceAccount: ""
  idpDbSchemaUpgrade:
    enabled: true # You can disable this
  serverDbSchemaUpgrade:
    enabled: true # You can disable this

Deployment Phase

1. Install Core Platform Components

Install the core platform components using Helm:

helm install authlete-platform . -n authlete -f secret-values.yaml
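
If you want to sanity-check the rendered manifests against your values before touching the cluster, standard Helm options work here:

# Render the manifests locally without installing anything
helm template authlete-platform . -n authlete -f secret-values.yaml | less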

Verify the installation:

# Check pod status
kubectl get pods -n authlete

Expected output:

NAME                       READY   STATUS    RESTARTS   AGE
api-6b78f87847-xxxxx       2/2     Running   0          2m
proxy-6c99bdc94b-xxxxx     1/1     Running   0          2m
redis-5f8f64df5d-xxxxx     1/1     Running   0          2m

Note: Initial deployment may take 5 minutes while images are pulled and databases are initialized.

2. Install Optional Components

The following components are optional based on your requirements:

  • Management Console: Primary interface for Authlete platform configuration
  • IdP Server: OIDC-compliant identity provider for user authentication and management

To install the optional components:

helm upgrade authlete-platform . -f secret-values.yaml -n authlete

Verify the optional components:

# Check new pod status
kubectl get pods -n authlete

Expected output:

NAME                       READY   STATUS    RESTARTS   AGE
console-6b78f87847-xxxxx   1/1     Running   0          2m
idp-6c99bdc94b-xxxxx       2/2     Running   0          2m

3. Configure Load Balancer

The final step is to set up a load balancer service to expose your Authlete deployment:

  1. First, reserve a static external IP address in your cloud provider.

Note: The following commands are GCP-specific. For other cloud providers (AWS, Azure, etc.), please refer to your cloud provider’s documentation for reserving a static IP address. You must reserve a regional static external IP address in GCP. This is required because GKE LoadBalancer services only support IPs allocated in the same region as the cluster.

# GCP-specific commands
# Reserve a static IP address
gcloud compute addresses create authlete-ip --region=us-central1

# Get the reserved IP address
gcloud compute addresses describe authlete-ip --region=us-central1
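
To capture just the address (useful for substituting into the Service manifest below), gcloud supports a format filter:

# Print only the reserved IP address
gcloud compute addresses describe authlete-ip --region=us-central1 --format='value(address)'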
  2. Create a load balancer service using the reserved IP. Create a file named proxy-lb-service.yaml:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: proxy
  name: proxy-lb
spec:
  externalTrafficPolicy: Local
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: 8443
  selector:
    app: proxy
  sessionAffinity: None
  type: LoadBalancer
  loadBalancerIP: <EXTERNAL_STATIC_IP>  # Replace with your reserved static IP
  3. Apply the load balancer configuration:
kubectl apply -f proxy-lb-service.yaml -n authlete
  4. Verify the load balancer is properly configured:
kubectl get service proxy-lb -n authlete

You should see output similar to:

NAME       TYPE           CLUSTER-IP      EXTERNAL-IP          PORT(S)         AGE
proxy-lb   LoadBalancer   10.x.x.x        YOUR_STATIC_IP      443:32xxx/TCP   1m

Once the EXTERNAL-IP shows your static IP (may take a few minutes), your Authlete deployment is accessible via HTTPS on that IP address.

4. Map Domain to Load Balancer

Create DNS records for all three domains pointing to your load balancer IP:

# API Server
api.your-domain.com.     IN  A     YOUR_STATIC_IP

# IdP Server
login.your-domain.com.   IN  A     YOUR_STATIC_IP

# Management Console
console.your-domain.com. IN  A     YOUR_STATIC_IP

Verify the DNS configuration:

# Test DNS configuration
dig +short api.your-domain.com
dig +short login.your-domain.com
dig +short console.your-domain.com

# Test HTTPS endpoints
curl -I https://api.your-domain.com/api/info

If all domains resolve to your load balancer IP and the endpoints are accessible, your Authlete deployment is ready for use.

You can now access the Management Console:

  1. Navigate to https://console.your-domain.com
  2. Log in using the admin credentials specified in your secret-values.yaml
  3. Begin configuring your Authlete services

Note: If you cannot access the console, verify that:

  • DNS records have fully propagated
  • Load balancer health checks are passing
  • TLS certificate is valid for all domains
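
A few quick commands that map to the checks above (proxy-lb is the Service created earlier):

# Are there healthy pods behind the load balancer?
kubectl get endpoints proxy-lb -n authlete

# Which certificate is actually being served for the console domain?
openssl s_client -connect console.your-domain.com:443 \
  -servername console.your-domain.com </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -ext subjectAltName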

Helm Chart Changelog

The current Helm chart version is 2.0.1.

September 2025 update

  • Added support for TLS connection with a private Certificate Authority
  • Added missing IP ranges for internal connections
  • Updated map_hash_bucket_size and server_names_hash_bucket_size parameters to accept larger values