To access Helm charts and container images from the Authlete registry, follow these steps:
First, create the namespace for the deployment:
kubectl create ns authlete
Use the following command to log in:
helm registry login -u <ORG_ID> -p <TOKEN> artifacts.authlete.com
Replace <ORG_ID> and <TOKEN> with the registry credentials provided by Authlete.
Once logged in, you can pull Helm charts from the registry.
The Authlete Helm chart is distributed in OCI format. Use the following command to pull and extract the chart locally:
# Current stable version is 2.1.0
helm pull oci://artifacts.authlete.com/authlete-platform-chart --version 2.1.0 --untar
cd authlete-platform-chart
You have two options for accessing Authlete container images:
Option A: Direct Registry Access (Development/Evaluation)
For development or evaluation environments, you can pull images directly from Authlete’s registry:
# Create a secret for registry authentication
kubectl create secret docker-registry authlete-registry \
-n authlete \
--docker-server=artifacts.authlete.com \
--docker-username=<ORG_ID> \
--docker-password=<TOKEN>
# Configure the default ServiceAccount to use this secret
kubectl patch serviceaccount default \
-n authlete \
-p '{"imagePullSecrets": [{"name": "authlete-registry"}]}'
Option B: Mirror Images (Production Recommended)
For production environments, see the Mirror Images section for instructions on mirroring images to your private registry.
For improved reliability and control, we recommend customers mirror Authlete-provided container images to their own container registry. This avoids direct runtime dependency on Authlete’s registry, and ensures reproducible deployments.
| Image | Description | Supported Version Tags |
|---|---|---|
| server | Core API server that handles OAuth 2.0 and OpenID Connect operations | 3.0.19 |
| server-db-schema | Database schema initialization tool for the API server | v3.0.19 |
| idp | Identity Provider server for user authentication and management | 1.0.18 |
| idp-db-schema | Database schema initialization tool for the IdP server | v1.0.18 |
| console | React-based management console for platform configuration and monitoring | 1.0.11 |
| nginx | Nginx-based reverse proxy for handling TLS termination and routing | 1.26.3 |
| valkey | Caching service for improved performance and reduced database load | 8.0.1 |
| gce-proxy | Cloud SQL proxy for secure database connections in GCP environments | 1.37.0 |
| authlete-bootstrapper | Initialization service for the platform. Only used during first deployment. | 1.1.0 |
# Authenticate to Authlete registry
docker login artifacts.authlete.com -u <ORG_ID> -p <TOKEN>
# Pull an image
docker pull artifacts.authlete.com/<image>:<tag>
# Tag and push to your own registry
docker tag artifacts.authlete.com/<image>:<tag> registry.mycompany.com/<image>:<tag>
docker push registry.mycompany.com/<image>:<tag>
Update your values.yaml to use the mirrored image paths before running the installation.
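For example, if you mirrored the images to registry.mycompany.com (the placeholder registry used in the commands above), the override would be a minimal sketch like:
global:
  repo: "registry.mycompany.com" # Registry that now hosts the mirrored Authlete images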
Alternatively, you can use crane to copy the images directly to your own registry:
# Set your target registry base
TARGET_REGISTRY="ghcr.io/your-org-name"
# Image copy commands
crane cp artifacts.authlete.com/server:3.0.19 $TARGET_REGISTRY/server:3.0.19
crane cp artifacts.authlete.com/server-db-schema:v3.0.19 $TARGET_REGISTRY/server-db-schema:v3.0.19
crane cp artifacts.authlete.com/idp:1.0.18 $TARGET_REGISTRY/idp:1.0.18
crane cp artifacts.authlete.com/idp-db-schema:v1.0.18 $TARGET_REGISTRY/idp-db-schema:v1.0.18
crane cp artifacts.authlete.com/console:1.0.11 $TARGET_REGISTRY/console:1.0.11
crane cp artifacts.authlete.com/nginx:1.26.3 $TARGET_REGISTRY/nginx:1.26.3
crane cp artifacts.authlete.com/valkey:8.0.1 $TARGET_REGISTRY/valkey:8.0.1
crane cp artifacts.authlete.com/gce-proxy:1.37.0 $TARGET_REGISTRY/gce-proxy:1.37.0
crane cp artifacts.authlete.com/authlete-bootstrapper:1.1.0 $TARGET_REGISTRY/authlete-bootstrapper:1.1.0
The default values.yaml is already bundled inside the chart. You can inspect or modify it for custom configurations.
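If you prefer to review the defaults without untarring the chart, you can also dump them with helm show values (a standard Helm command; adjust the chart reference and version to your setup):
# Write the chart's default values to a local file for review
helm show values oci://artifacts.authlete.com/authlete-platform-chart --version 2.1.0 > default-values.yaml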
Update global.repo to point to your own registry:
global:
  id: "authlete-platform"
  repo: "registry.your-company.com" # Required: Your container registry
Update the domains section with your domain names:
domains: # Required: These domains must be accessible by your users
  api: "api.your-domain.com" # API server
  idp: "login.your-domain.com" # IdP server
  console: "console.your-domain.com" # Management console
Create a TLS secret named proxy-certs of type kubernetes.io/tls in the authlete namespace before installation. The certificate should cover all domains used by the platform (e.g. api.example.com, login.example.com, console.example.com). You can use a wildcard or a SAN certificate.
kubectl create secret tls proxy-certs \
  --cert=./tls.crt \
  --key=./tls.key \
  -n authlete
Note: If you want to manage proxy-certs through an External Secret Manager and skip this step, refer to the 'External Secret Manager' section below.
Cache: TLS encryption for the cache connection can be enabled with a simple configuration change:
cache:
  api:
    enabled: true
    auth:
      enabled: false
    connection:
      tls: true
      host: redis
      port: 6379
  idp:
    enabled: true
    auth:
      enabled: false
    connection:
      tls: true
      host: redis
      port: 6379
Database: TLS connection to the database can be enabled with the sslMode property:
database:
  idp: # IdP Server Database
    name: idp
    host: localhost
    connectionParams:
      allowPublicKeyRetrieval: true
      sslMode: verify-ca # Configure the sslMode here.
  api: # API Server Database
    name: server
    host: localhost
    connectionParams:
      allowPublicKeyRetrieval: true
      sslMode: verify-ca # Configure the sslMode here.
Note: When using Cloud SQL Proxy, sslMode should be set to disable because the connection is already encrypted by the proxy. Enabling an additional layer of TLS will not work.
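As a sketch, with the proxy in use the database block shown above would simply set sslMode to disable:
database:
  idp: # IdP Server Database
    connectionParams:
      sslMode: disable # The Cloud SQL Proxy already encrypts the connection
  api: # API Server Database
    connectionParams:
      sslMode: disable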
Note: If the certificate presented by Redis or Database over TLS is issued by a private CA, you should create a custom bundle containing all certificates and load the bundle into the applications. Please refer to the Private CA section for more details.
To use TLS with a private CA certificate, a custom bundle containing all necessary certificates needs to be prepared outside of the Helm chart. Next, we will create a ConfigMap that contains the bundles and satisfies the following contract: it must expose the keys ca-bundle.p12 and ca-bundle.crt. We recommend extracting the public CA bundle as described in the trust-manager documentation. Below, we explain how to create a ConfigMap with the bundle.
Trust-manager makes it easy to manage trust bundles in Kubernetes. For installation instructions, please consult trust-manager's official documentation. Once trust-manager is installed, you will need to create a single Bundle object. The ConfigMap containing all necessary certificates will then be generated automatically.
Here is an example Bundle from which trust-manager will assemble the final bundle. The Bundle's manifest file name is assumed to be bundle-creation.yaml:
apiVersion: trust.cert-manager.io/v1alpha1
kind: Bundle
metadata:
  name: custom-ca-bundle
spec:
  sources:
    - useDefaultCAs: true
    # A manually specified PEM-encoded cert, included directly into the Bundle
    - inLine: |
        -----BEGIN CERTIFICATE-----
        MIIC5zCCAc+gAwIBAgIBADANBgkqhkiG9w0BAQsFADAVMRMwEQYDVQQDEwprdWJl
        ...
        ... contents of proxy's CA certificate ...
        -----END CERTIFICATE-----
    - inLine: |
        -----BEGIN CERTIFICATE-----
        ... contents of mysql's CA certificate ...
        -----END CERTIFICATE-----
    - inLine: |
        -----BEGIN CERTIFICATE-----
        ... contents of valkey's CA certificate ...
        -----END CERTIFICATE-----
  target:
    # The generated ConfigMap will include a PEM-formatted bundle, here named "ca-bundle.crt",
    # and in this case we also request a binary bundle in PKCS#12 format,
    # here named "ca-bundle.p12".
    configMap:
      key: "ca-bundle.crt"
    additionalFormats:
      pkcs12:
        key: "ca-bundle.p12"
    namespaceSelector:
      matchLabels:
        kubernetes.io/metadata.name: "authlete" # Deployment namespace
Finally, create the Bundle with:
kubectl create -f bundle-creation.yaml -n authlete
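To confirm that trust-manager generated the target ConfigMap (by default it shares the Bundle's name), an optional check such as the following can be used:
# The ConfigMap should exist in the authlete namespace and expose the ca-bundle.crt and ca-bundle.p12 keys
kubectl describe configmap custom-ca-bundle -n authlete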
Note: Please note that if you’re using an official cert-manager-provided Debian trust package, you should check regularly to ensure the trust package is kept up to date.
The custom bundle can be created in a few simple steps using Docker commands.
Note: The tutorial below demonstrates sample code only. Customers are responsible for creating, storing, and updating bundles securely.
Create the following script, gen-bundles.sh:
# Input/output directories
SRC="/certs" # Location to place crt files
DST="/usr/local/share/ca-certificates/custom" # Directory monitored by update-ca-certificates
OUT="/bundle" # Output directory for the bundle
# Clean up output directory
rm -f $OUT/*
# Create output directories
mkdir -p "$DST"
# Install required packages (ca-certificates-java is necessary for generating the Java truststore)
apt-get -y update
apt-cache madison ca-certificates
DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends openssl default-jre-headless ca-certificates-java "ca-certificates=20230311+deb12u1"
# 1) Place additional certificates (*.pem / *.crt placed as .crt)
cp -v ${SRC}/* ${DST}/
# If you want to unify the extension so that .pem files are also treated as .crt, enable the following
# for f in "$DST"/*.pem; do mv -v "$f" "${f%.pem}.crt"; done
# 2) Update system & Java truststore
update-ca-certificates -f
# 3) Output (PEM and PKCS12)
cp -v /etc/ssl/certs/ca-certificates.crt "$OUT/ca-bundle.crt"
keytool -importkeystore -srckeystore /etc/ssl/certs/java/cacerts -destkeystore $OUT/ca-bundle.p12 -deststoretype PKCS12 -srcstorepass changeit -deststorepass changeit
Place your CA certificates in a certs directory, using the .crt extension:
./certs/
├── sql-ca.crt
├── redis-ca.crt
└── ...
Then run the script in a throwaway Debian container:
docker run --rm --user root -v "$PWD/certs:/certs:ro" -v "$PWD/out:/bundle" -v "$PWD/gen-bundles.sh:/usr/local/bin/gen-bundles.sh:ro" --entrypoint bash docker.io/library/debian:12-slim -c "sh /usr/local/bin/gen-bundles.sh"
Once you have the ca-bundle.p12 and ca-bundle.crt files, you can create the Kubernetes ConfigMap with:
kubectl create cm custom-ca-bundle --from-file=ca-bundle.p12 --from-file=ca-bundle.crt
Note: The current Debian base image used for the trust-manager's bundle creation is docker.io/library/debian:12-slim and the ca-certificates target version is 20230311+deb12u1.
Now that the bundle is created and ready for use, one more property needs to be configured to make the bundle available to the applications. Set .Values.global.tls.caBundle.enabled to "true" so that the bundle is mounted into the application pods.
global:
  tls:
    caBundle:
      enabled: true # Enable this
      type: ConfigMap
      name: custom-ca-bundle
      trustStore:
        password: "changeit"
If you create a new bundle and replace the old one with it, you should restart the IdP, Authlete server, and proxy pods (applications). On restart, the applications will load the updated bundle.
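As a sketch, assuming the Deployment names match the pod name prefixes shown later in this guide (api, idp, proxy), a rolling restart could look like this:
# Trigger a rolling restart so the pods reload the updated bundle (adjust names to your environment)
kubectl rollout restart deployment/api deployment/idp deployment/proxy -n authlete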
The platform requires two databases: one for the API server and one for the IdP server. Configure the connection details in secret-values.yaml:
Note: For MySQL 8.0+, ensure your databases are configured as follows (an example creation command is shown after this list):
- Character set: utf8mb4
- Collation: utf8mb4_0900_ai_ci
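For example, the two databases (named idp and server elsewhere in this guide) could be created as follows; the host and account are placeholders, so adapt them to your environment:
# Hypothetical example; adjust host, user, and database names as needed
mysql -h <mysql-host> -u root -p -e "CREATE DATABASE idp CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci;"
mysql -h <mysql-host> -u root -p -e "CREATE DATABASE server CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci;"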
The secret-values.yaml file is also included in the chart archive. Modify secret-values.yaml with your database and Authlete admin credentials:
database:
  idp: # IdP Server Database
    user: authlete # Database user
    password: !raw ***** # User password
  api: # API Server Database
    user: authlete
    password: !raw ******
idp:
  auth:
    adminUser:
      email: "admin@authlete.com"
      password: !raw ******
    encryptionSecret: ********
For GCP Cloud SQL:
cloudSql:
  enabled: true
  image: gce-proxy:1.37.0
  instance: project:region:instance # Your Cloud SQL instance
  port: 3306
For other cloud providers, disable Cloud SQL proxy and use direct connection:
cloudSql:
  enabled: false
Note: For AWS EKS, if you are using a DNS name for your MySQL database, with or without RDS Proxy (a configuration sketch follows below):
- with RDS Proxy: host: sample-db.proxy-stu901vwx234.us-east-1.rds.amazonaws.com
- without RDS Proxy: host: sample-db.cluster-stu901vwx234.us-east-1.rds.amazonaws.com
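As a sketch, the host value in the database section shown earlier would then use the DNS name (the sample endpoints from the note above, not real ones):
database:
  idp: # IdP Server Database
    host: sample-db.cluster-stu901vwx234.us-east-1.rds.amazonaws.com
  api: # API Server Database
    host: sample-db.cluster-stu901vwx234.us-east-1.rds.amazonaws.com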
The platform supports two caching configuration options:
For production environments, you might prefer using managed cache services. The platform supports various managed services including:
When using an External Secret Manager, make sure the secrets needed for the Helm deployment are pre-configured and present on the cluster. If any of the secrets below are missing, deployments and upgrades will fail.
The required secrets are authlete-credentials and proxy-certs (the latter only if you want the External Secret Manager to manage proxy-certs). You can verify their presence as shown below.
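A quick pre-flight check (sketch; the proxy-certs line only applies if it is managed externally):
# Both secrets must exist in the authlete namespace before running helm install/upgrade
kubectl get secret authlete-credentials -n authlete
kubectl get secret proxy-certs -n authlete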
Create a secret named authlete-secret in your External Secret Manager with the following structure:
{
"database": {
"idp": {
"user": "root",
"password": "<placeholder-idp-db-password>"
},
"api": {
"user": "root",
"password": "<placeholder-api-db-password>"
}
},
"cache": {
"api": {
"auth": {
"user": "default",
"password": "<placeholder-api-cache-password>"
}
},
"idp": {
"auth": {
"user": "default",
"password": "<placeholder-idp-cache-password>"
}
}
},
"idp": {
"adminUser": {
"email": "admin@authlete.com",
"password": "mypassword"
},
"encryptionSecret": "<placeholder-random-secret>"
}
}
Replace <ClusterSecretStore-name> with the correct ClusterSecretStore name in your cluster:
kubectl apply -f - <<EOF
apiVersion: external-secrets.io/v1
kind: ExternalSecret
metadata:
  name: authlete-secret
  namespace: authlete
spec:
  refreshInterval: 30s
  secretStoreRef:
    kind: ClusterSecretStore
    name: <ClusterSecretStore-name>
  target:
    name: authlete-credentials
    creationPolicy: Owner
  data:
    # IDP Database
    - secretKey: authlete-idp-db-user
      remoteRef:
        key: authlete-secret
        property: database.idp.user
    - secretKey: authlete-idp-db-password
      remoteRef:
        key: authlete-secret
        property: database.idp.password
    # API Database
    - secretKey: authlete-api-db-user
      remoteRef:
        key: authlete-secret
        property: database.api.user
    - secretKey: authlete-api-db-password
      remoteRef:
        key: authlete-secret
        property: database.api.password
    # IDP Admin
    - secretKey: authlete-idp-admin-email
      remoteRef:
        key: authlete-secret
        property: idp.adminUser.email
    - secretKey: authlete-idp-admin-password
      remoteRef:
        key: authlete-secret
        property: idp.adminUser.password
    - secretKey: authlete-idp-encryption-secret
      remoteRef:
        key: authlete-secret
        property: idp.encryptionSecret
    # Only needed if cache is enabled
    - secretKey: authlete-api-memcache-auth-user
      remoteRef:
        key: authlete-secret
        property: cache.api.auth.user
    - secretKey: authlete-api-memcache-auth-password
      remoteRef:
        key: authlete-secret
        property: cache.api.auth.password
    - secretKey: authlete-idp-memcache-auth-user
      remoteRef:
        key: authlete-secret
        property: cache.idp.auth.user
    - secretKey: authlete-idp-memcache-auth-password
      remoteRef:
        key: authlete-secret
        property: cache.idp.auth.password
EOF
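Once applied, you can optionally confirm that the ExternalSecret synced and that the target Secret was created:
# The ExternalSecret status should report a successful sync
kubectl get externalsecret authlete-secret -n authlete
kubectl get secret authlete-credentials -n authlete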
Create a secret named authlete-proxy-secret in your External Secret Manager. When storing certificates in the external secret manager, ensure the JSON keys are exactly:
- `tls.crt` for the certificate chain
- `tls.key` for the private key
Example format:
{
  "tls.crt": "-----BEGIN CERTIFICATE-----\nMIIE...server-cert-data...==\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nMIIF...intermediate-cert...==\n-----END CERTIFICATE-----",
  "tls.key": "-----BEGIN PRIVATE KEY-----\nMIIE...private-key-data...==\n-----END PRIVATE KEY-----"
}
Moving TLS certificates to the External Secret Manager is optional; otherwise, follow the deployment guide to configure TLS certificates (Installation Steps -> Configuration Phase -> Configure TLS Certificates).
Replace <ClusterSecretStore-name> with the correct ClusterSecretStore name in your cluster:
kubectl apply -f - <<EOF
apiVersion: external-secrets.io/v1
kind: ExternalSecret
metadata:
  name: authlete-proxy-secret
  namespace: authlete
spec:
  refreshInterval: 30s
  secretStoreRef:
    kind: ClusterSecretStore
    name: <ClusterSecretStore-name>
  target:
    name: proxy-certs
    creationPolicy: Owner
  data:
    # TLS Certificates
    - secretKey: tls.crt
      remoteRef:
        key: authlete-proxy-secret
        property: tls.crt
    - secretKey: tls.key
      remoteRef:
        key: authlete-proxy-secret
        property: tls.key
EOF
Note: To enable the externalSecrets feature in the Helm chart, refer to the values below inside values.yaml
externalSecrets:
  enabled: true # Set to true to enable ESM | default 'false'
If your domain names are very long, you might hit nginx's map_hash_bucket_size limit. You can work around this problem by increasing the values of mapHashBucketSize and serverNamesHashBucketSize:
configs:
  mapHashBucketSize: 128 # Change this
  serverNamesHashBucketSize: 128 # Change this
We introduced DB schema update hooks to streamline image version updates. With these hooks, new Liquibase changesets are applied automatically so that the database schema stays on the same version as the application. The hooks are idempotent and will not change the existing database schema as long as there is no schema change between the old and upgraded versions.
These hooks are required to create the database schema during installation and when updating the application version. However, if you are not deploying a new application version or expecting any schema changes, they can be disabled:
hooks:
  serviceAccount: ""
  idpDbSchemaUpgrade:
    enabled: true # You can disable this
  serverDbSchemaUpgrade:
    enabled: true # You can disable this
Note for Google Cloud Users: When using Memorystore for Valkey, we recommend setting up connectivity using Private Service Connect (PSC) with service connection policies. This is the only supported networking method for Memorystore for Valkey. For detailed setup instructions, refer to the Memorystore for Valkey networking documentation.
To use a managed cache service, update your values.yaml file with the following information:
cache:
  api:
    enabled: true
    auth:
      enabled: false
    connection:
      tls: false
      host: redis
      port: 6379
  idp:
    enabled: true
    auth:
      enabled: false
    connection:
      tls: false
      host: redis
      port: 6379
Note: If your managed cache service is configured in clustered mode, add the following environment variable:
- name: MEMCACHE_BACKEND
  value: "redis-cluster"
Install the core platform components using Helm:
# With credentials provided via secret-values.yaml
helm install authlete-platform . -n authlete -f secret-values.yaml
# Or without secret-values.yaml (e.g. when credentials are provided through an External Secret Manager)
helm install authlete-platform . -n authlete
Verify the installation:
# Check pod status
kubectl get pods -n authlete
Expected output:
NAME READY STATUS RESTARTS AGE
api-6b78f87847-xxxxx 2/2 Running 0 2m
proxy-6c99bdc94b-xxxxx 1/1 Running 0 2m
Note: Initial deployment may take 5 minutes while images are pulled and databases are initialized.
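Optionally, instead of polling, you can wait for the pods to become Ready (a generic kubectl approach, not an Authlete-specific step):
# Block until all pods in the namespace are Ready, or time out after 10 minutes
kubectl wait --for=condition=Ready pods --all -n authlete --timeout=10m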
You may also install the following optional components based on your requirements:
# With credentials provided via secret-values.yaml
helm upgrade authlete-platform . -n authlete -f secret-values.yaml
# Or without secret-values.yaml (e.g. when credentials are provided through an External Secret Manager)
helm upgrade authlete-platform . -n authlete
Verify the optional components:
# Check new pod status
kubectl get pods -n authlete
Expected output:
NAME READY STATUS RESTARTS AGE
console-6b78f87847-xxxxx 1/1 Running 0 2m
idp-6c99bdc94b-xxxxx 2/2 Running 0 2m
The final step is to set up a load balancer service to expose your Authlete deployment:
Note: The following commands are GCP-specific. For other cloud providers (AWS, Azure, etc.), please refer to your cloud provider’s documentation for reserving a static IP address. You must reserve a regional static external IP address in GCP. This is required because GKE LoadBalancer services only support IPs allocated in the same region as the cluster.
# GCP-specific commands
# Reserve a static IP address
gcloud compute addresses create authlete-ip --region=us-central1
# Get the reserved IP address
gcloud compute addresses describe authlete-ip --region=us-central1
Create a Service manifest named proxy-lb-service.yaml:
This is a good example for a GKE deployment:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: proxy
  name: proxy-lb
spec:
  externalTrafficPolicy: Local
  ports:
    - name: https
      port: 443
      protocol: TCP
      targetPort: 8443
  selector:
    app: proxy
  sessionAffinity: None
  type: LoadBalancer
  loadBalancerIP: #external_static_ip # Replace with your reserved static IP
This is a good example for an AWS EKS deployment:
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-eip-allocations: eipalloc-xxxxxxxxxxxxxxxxx # Replace with your EIP allocation ID
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
    service.beta.kubernetes.io/aws-load-balancer-subnets: subnet-xxxxxxxxx # Replace with your subnet ID
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
  labels:
    app: proxy
  name: proxy-lb
spec:
  externalTrafficPolicy: Local
  ports:
    - name: https
      port: 443
      protocol: TCP
      targetPort: 8443
  selector:
    app: proxy
  sessionAffinity: None
  type: LoadBalancer
Apply the manifest and check the Service:
kubectl apply -f proxy-lb-service.yaml -n authlete
kubectl get service proxy-lb -n authlete
You should see output similar to:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
proxy-lb LoadBalancer 10.x.x.x YOUR_STATIC_IP 443:32xxx/TCP 1m
Once the EXTERNAL-IP shows your static IP (may take a few minutes), your Authlete deployment is accessible via HTTPS on that IP address.
Create DNS records for all three domains pointing to your load balancer IP:
# API Server
api.your-domain.com. IN A YOUR_STATIC_IP
# IdP Server
login.your-domain.com. IN A YOUR_STATIC_IP
# Management Console
console.your-domain.com. IN A YOUR_STATIC_IP
Verify the DNS configuration:
# Test DNS configuration
dig +short api.your-domain.com
dig +short login.your-domain.com
dig +short console.your-domain.com
# Test HTTPS endpoints
curl -I https://api.your-domain.com/api/info
If all domains resolve to your load balancer IP and the endpoints are accessible, your Authlete deployment is ready for use.
You can now access the Management Console:
https://console.your-domain.com
Log in with the admin credentials configured in secret-values.yaml.
Note: If you cannot access the console, verify that: