To access Helm charts and container images from the Authlete registry, follow these steps:
# Create the namespace used throughout this guide
kubectl create ns authlete
Use the following command to log in:
helm registry login -u <ORG_ID> -p <TOKEN> artifacts.authlete.com
Replace <ORG_ID> and <TOKEN> with the credentials provided by Authlete.
Once logged in, you can pull Helm charts from the registry.
The Authlete Helm chart is distributed in OCI format. Use the following command to pull and extract the chart locally:
# Current stable version is 2.0.1
helm pull oci://artifacts.authlete.com/authlete-platform-chart --version 2.0.1 --untar
cd authlete-platform-chart
You have two options for accessing Authlete container images:
Option A: Direct Registry Access (Development/Evaluation)
For development or evaluation environments, you can pull images directly from Authlete’s registry:
# Create a secret for registry authentication
kubectl create secret docker-registry authlete-registry \
-n authlete \
--docker-server=artifacts.authlete.com \
--docker-username=<ORG_ID> \
--docker-password=<TOKEN>
# Configure the default ServiceAccount to use this secret
kubectl patch serviceaccount default \
-n authlete \
-p '{"imagePullSecrets": [{"name": "authlete-registry"}]}'
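If you want to confirm the patch took effect, the ServiceAccount can be inspected directly; this is just a quick sanity check, not a required step:
# Should print: authlete-registry
kubectl get serviceaccount default -n authlete -o jsonpath='{.imagePullSecrets[*].name}'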
Option B: Mirror Images (Production Recommended)
For production environments, see the Mirror Images section for instructions on mirroring images to your private registry.
For improved reliability and control, we recommend customers mirror Authlete-provided container images to their own container registry. This avoids direct runtime dependency on Authlete’s registry, and ensures reproducible deployments.
| Image | Description | Supported Version Tags |
|---|---|---|
| server | Core API server that handles OAuth 2.0 and OpenID Connect operations | 3.0.19 |
| server-db-schema | Database schema initialization tool for the API server | v3.0.19 |
| idp | Identity Provider server for user authentication and management | 1.0.18 |
| idp-db-schema | Database schema initialization tool for the IdP server | v1.0.18 |
| console | React-based management console for platform configuration and monitoring | 1.0.11 |
| nginx | Nginx-based reverse proxy for handling TLS termination and routing | 1.26.3 |
| valkey | Caching service for improved performance and reduced database load | 8.0.1 |
| gce-proxy | Cloud SQL proxy for secure database connections in GCP environments | 1.37.0 |
| authlete-bootstrapper | Initialization service for the platform. Only used during the first deployment. | 1.1.0 |
# Authenticate to Authlete registry
docker login artifacts.authlete.com -u <ORG_ID> -p <TOKEN>
# Pull an image
docker pull artifacts.authlete.com/<image>:<tag>
# Tag and push to your own registry
docker tag artifacts.authlete.com/<image>:<tag> registry.mycompany.com/<image>:<tag>
docker push registry.mycompany.com/<image>:<tag>
Update your values.yaml to use the mirrored image paths before running the installation.
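As a rough sketch (registry.mycompany.com is just the example host used above), the relevant change in values.yaml is the registry prefix described later in the global section:
global:
  repo: "registry.mycompany.com" # Images are then pulled from your mirror instead of artifacts.authlete.com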
Alternatively, you can use crane to copy the images directly to your own registry.
# Set your target registry base
TARGET_REGISTRY="ghcr.io/your-org-name"
# Image copy commands
crane cp artifacts.authlete.com/server:3.0.19 $TARGET_REGISTRY/server:3.0.19
crane cp artifacts.authlete.com/server-db-schema:v3.0.19 $TARGET_REGISTRY/server-db-schema:v3.0.19
crane cp artifacts.authlete.com/idp:1.0.18 $TARGET_REGISTRY/idp:1.0.18
crane cp artifacts.authlete.com/idp-db-schema:v1.0.18 $TARGET_REGISTRY/idp-db-schema:v1.0.18
crane cp artifacts.authlete.com/console:1.0.11 $TARGET_REGISTRY/console:1.0.11
crane cp artifacts.authlete.com/nginx:1.26.3 $TARGET_REGISTRY/nginx:1.26.3
crane cp artifacts.authlete.com/valkey:8.0.1 $TARGET_REGISTRY/valkey:8.0.1
crane cp artifacts.authlete.com/gce-proxy:1.37.0 $TARGET_REGISTRY/gce-proxy:1.37.0
crane cp artifacts.authlete.com/authlete-bootstrapper:1.1.0 $TARGET_REGISTRY/authlete-bootstrapper:1.1.0
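If you prefer not to repeat the command, the same copies can be written as a small shell loop; this is equivalent to the explicit list above:
# Copy every image:tag pair in one pass
for ref in server:3.0.19 server-db-schema:v3.0.19 idp:1.0.18 idp-db-schema:v1.0.18 \
  console:1.0.11 nginx:1.26.3 valkey:8.0.1 gce-proxy:1.37.0 authlete-bootstrapper:1.1.0; do
  crane cp "artifacts.authlete.com/${ref}" "${TARGET_REGISTRY}/${ref}"
done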
The default values.yaml is already bundled inside the chart. You can inspect or modify it for custom configurations. Update global.repo to point to your own registry, and fill in the domains section with your domain names:
global:
  id: "authlete-platform"
  repo: "registry.your-company.com" # Required: Your container registry
  domains: # Required: These domains must be accessible to your users
    api: "api.your-domain.com" # API server
    idp: "login.your-domain.com" # IdP server
    console: "console.your-domain.com" # Management console
Create a TLS secret of type kubernetes.io/tls in the authlete namespace before installation. The certificate should cover all domains used by the platform (e.g. api.example.com, login.example.com, console.example.com). You can use a wildcard or a SAN certificate.
kubectl create secret tls proxy-certs \
--cert=./tls.crt \
--key=./tls.key \
-n authlete
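It can also be worth confirming that the certificate really covers all three domains. One way to do this locally (assuming OpenSSL 1.1.1 or later is available) is to print the Subject Alternative Names:
# List the SANs in the certificate; the api, login and console hostnames should all appear
openssl x509 -in ./tls.crt -noout -ext subjectAltName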
TLS encryption can be enabled with simple configuration changes.
Redis: If the Redis connection requires TLS encryption, configure rediss:// in the MEMCACHE_HOST value:
api:
env:
- name: MEMCACHE_ENABLE
value: "true"
- name: MEMCACHE_HOST
value: "rediss://<username>:<password>@host:port" # Can be hostname or IP address
...
For IdP deployment:
idp:
env:
- name: JAVA_OPTS
value: "-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8008"
- name: AUTHLETE_IDP_REMOTE_CACHE_ENDPOINT
value: "rediss://<username>:<password>@host:port" # Can be hostname or IP address
- name: CLUSTER_NAME
...
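If you want to sanity-check that the cache endpoint actually speaks TLS before rolling this out, a quick probe from any machine with network access works; host and port below are placeholders for your managed Redis/Valkey endpoint:
# Print the certificate subject and issuer presented by the cache endpoint
openssl s_client -connect host:port </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer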
Database: A TLS connection to the database can be enabled with the sslMode property:
database:
idp: # IdP Server Database
name: idp
user: authlete
password: !raw *****
host: localhost
connectionParams:
allowPublicKeyRetrieval: true
sslMode: verify-ca # Configure the sslMode here.
api: # API Server Database
name: server
user: authlete
password: !raw ******
host: localhost
connectionParams:
allowPublicKeyRetrieval: true
sslMode: verify-ca # Configure the sslMode here.
Note: When using Cloud SQL Proxy, sslMode should be set to disable because the connection is already encrypted by the proxy. Enabling an additional layer of TLS will not work.
Note: If the certificate presented by Redis or Database over TLS is issued by a private CA, you should create a custom bundle containing all certificates and load the bundle into the applications. Please refer to the Private CA section for more details.
To use TLS with a private CA certificate, a custom bundle containing all necessary certificates needs to be prepared outside of the Helm chart. Next, we will create a ConfigMap that contains the bundles and satisfies the following contract: it must expose the keys ca-bundle.p12 and ca-bundle.crt.
We recommend extracting the public CA bundle as described in the trust-manager documentation. Below, we explain how to create a ConfigMap with the bundle.
Trust-manager makes it easy to manage trust bundles in Kubernetes. For installation instructions, please consult trust-manager's official documentation. Once trust-manager is installed, you will need to create a single Bundle object. The ConfigMap containing all necessary certificates will then be generated automatically.
Here is an example Bundle from which trust-manager will assemble the final bundle. This Bundle's manifest file name is assumed to be bundle-creation.yaml:
apiVersion: trust.cert-manager.io/v1alpha1
kind: Bundle
metadata:
name: custom-ca-bundle
spec:
sources:
- useDefaultCAs: true
# A manually specified PEM-encoded cert, included directly into the Bundle
- inLine: |
-----BEGIN CERTIFICATE-----
MIIC5zCCAc+gAwIBAgIBADANBgkqhkiG9w0BAQsFADAVMRMwEQYDVQQDEwprdWJl
...
... contents of proxy's CA certificate ...
-----END CERTIFICATE-----
- inLine: |
-----BEGIN CERTIFICATE-----
... contents of mysql's CA certificate ...
-----END CERTIFICATE-----
- inLine: |
-----BEGIN CERTIFICATE-----
... contents of valkey's CA certificate ...
-----END CERTIFICATE-----
target:
# All ConfigMaps will include a PEM-formatted bundle, here named "root-certs.pem"
# and in this case we also request binary formatted bundles in PKCS#12 format,
# here named "bundle.p12".
configMap:
key: "ca-bundle.crt"
additionalFormats:
pkcs12:
key: "ca-bundle.p12"
namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: "authlete" # Deployment namespace
Finally, create the Bundle with
kubectl create -f bundle-creation.yaml -n authlete
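Once trust-manager has reconciled the Bundle, a ConfigMap named custom-ca-bundle should appear in the authlete namespace. A quick way to confirm that both keys were generated:
# The output should include both the ca-bundle.crt and ca-bundle.p12 keys
kubectl describe configmap custom-ca-bundle -n authlete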
Note: If you're using an official cert-manager-provided Debian trust package, you should check regularly to ensure the trust package is kept up to date.
The custom Bundle can be created in a few simple steps using Docker commands.
Note: The tutorial below demonstrates sample code only. Customers are responsible for creating, storing, and updating bundles securely.
Save the following script as gen-bundles.sh:
# Input/output directories
SRC="/certs" # Location to place crt files
DST="/usr/local/share/ca-certificates/custom" # Directory monitored by update-ca-certificates
OUT="/bundle" # Output directory for the bundle
# Clean up output directory
rm -f $OUT/*
# Create output directories
mkdir -p "$DST"
# Install required packages (ca-certificates-java is necessary for generating the Java truststore)
apt-get -y update
apt-cache madison ca-certificates
DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends openssl default-jre-headless ca-certificates-java "ca-certificates=20230311+deb12u1"
# 1) Place additional certificates (*.pem / *.crt placed as .crt)
cp -v ${SRC}/* ${DST}/
# If you want to unify the extension so that .pem files are also treated as .crt, enable the following
# for f in "$DST"/*.pem; do mv -v "$f" "${f%.pem}.crt"; done
# 2) Update system & Java truststore
update-ca-certificates -f
# 3) Output (PEM and PKCS12)
cp -v /etc/ssl/certs/ca-certificates.crt "$OUT/ca-bundle.crt"
keytool -importkeystore -srckeystore /etc/ssl/certs/java/cacerts -destkeystore $OUT/ca-bundle.p12 -deststoretype PKCS12 -srcstorepass changeit -deststorepass changeit
Place the CA certificate files in a certs directory, using the .crt extension:
./certs/
├── sql-ca.crt
├── redis-ca.crt
└── ...
docker run --rm --user root -v "$PWD/certs:/certs:ro" -v "$PWD/out:/bundle" -v "$PWD/gen-bundles.sh:/usr/local/bin/gen-bundles.sh:ro" --entrypoint bash docker.io/library/debian:12-slim -c "sh /usr/local/bin/gen-bundles.sh"
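Before loading the result into the cluster, you can sanity-check what the script produced in ./out; the changeit password matches the one used by the script, and keytool is only needed on the machine running the check:
# Count certificates in the PEM bundle
grep -c 'BEGIN CERTIFICATE' out/ca-bundle.crt
# List the first few entries of the PKCS#12 truststore
keytool -list -keystore out/ca-bundle.p12 -storetype PKCS12 -storepass changeit | head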
Once you have the ca-bundle.p12 and ca-bundle.crt files, you can create the Kubernetes ConfigMap with:
kubectl create cm custom-ca-bundle --from-file=ca-bundle.p12 --from-file=ca-bundle.crt
Note: The current Debian base image used for the trust-manager bundle creation is docker.io/library/debian:12-slim, and the ca-certificates target version is 20230311+deb12u1.
Now that the bundle is created and ready for use, we need to configure one more property to make it available to the applications. Set .Values.global.tls.caBundle.enabled to "true" so that the bundle is mounted into the application pods.
global:
tls:
caBundle:
enabled: true # Enable this
type: ConfigMap
name: custom-ca-bundle
trustStore:
password: "changeit"
If you created a new bundle and replaced the old one with it, you should restart the IdP, Authlete Server, and proxy pods (applications). On restart, the applications will load the updated bundle.
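As a sketch, assuming the Deployment names match the pod names shown later in this guide (api, idp, proxy), the restart could look like this:
# Restart the workloads so they reload the updated CA bundle (Deployment names are assumptions)
kubectl rollout restart deployment/api deployment/idp deployment/proxy -n authlete
kubectl rollout status deployment/api -n authlete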
The platform requires two databases: one for the API server and one for the IdP server. Configure the connection details in secret-values.yaml:
Note: For MySQL 8.0+, ensure your databases are configured with the following (see the example after this list):
- Character set: utf8mb4
- Collation: utf8mb4_0900_ai_ci
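As an example only (the database names idp and server match the sample values below, and root access to the MySQL instance is an assumption), the two databases could be created like this:
# Create both databases with the required character set and collation
mysql -u root -p -e "CREATE DATABASE idp CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci; CREATE DATABASE server CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci;"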
The secret-values.yaml file is also included in the chart archive. Modify secret-values.yaml with your database and Authlete admin credentials:
database:
idp: # IdP Server Database
name: idp # Database name
user: authlete # Database user
password: !raw ***** # User password
host: localhost # Database host
connectionParams:
allowPublicKeyRetrieval: true
sslMode: disable # disable|trust|verify-ca|verify-full
api: # API Server Database
name: server
user: authlete
password: !raw ******
host: localhost
connectionParams:
allowPublicKeyRetrieval: true
sslMode: disable # disable|trust|verify-ca|verify-full
idp:
auth:
adminUser:
email: "admin@authlete.com"
password: !raw ******
encryptionSecret: ********
For GCP Cloud SQL:
cloudSql:
enabled: true
image: gce-proxy:1.37.0
instance: project:region:instance # Your Cloud SQL instance
port: 3306
For other cloud providers, disable Cloud SQL proxy and use direct connection:
cloudSql:
enabled: false
The platform supports two caching configuration options:
By default, the platform deploys a Valkey pod as the caching solution. This configuration is ready to use out of the box and is defined in values.yaml:
redis:
name: redis
enabled: true
image: valkey:8.0.1
# -- Resources requirements
resources:
requests:
cpu: 10m
memory: 440M
limits:
cpu: 1
memory: 440M
If your domain name is very long, you might hit the limit of nginx's map_hash_bucket_size parameter. You can work around this by increasing mapHashBucketSize and serverNamesHashBucketSize, which set the corresponding nginx map_hash_bucket_size and server_names_hash_bucket_size parameters to larger values.
configs:
mapHashBucketSize: 128 # Change this
serverNamesHashBucketSize: 128 # Change this
We introduced DB schema update hooks to streamline image version updates. With these hooks, new Liquibase changesets are applied automatically so that the database schema stays on the same version as the application. The hooks are idempotent and will not change the existing database schema as long as there is no schema change between the old version and the upgraded one.
The hooks are required to create the database schema during installation and when updating the application version. However, if you are not deploying a new application version or expecting any schema changes, they can be disabled.
hooks:
serviceAccount: ""
idpDbSchemaUpgrade:
enabled: true # You can disable this
serverDbSchemaUpgrade:
enabled: true # You can disable this
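If you want to confirm the schema hooks actually ran during an install or upgrade, the hook workloads show up as Jobs in the namespace; exact names depend on the chart, so treat the filter below as a guess:
# Hook Jobs/Pods typically include the schema upgrade name
kubectl get jobs,pods -n authlete | grep -i schema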
For production environments, you might prefer using a managed cache service. The platform supports various managed services, including Memorystore for Valkey on Google Cloud.
Note for Google Cloud Users: When using Memorystore for Valkey, we recommend setting up connectivity using Private Service Connect (PSC) with service connection policies. This is the only supported networking method for Memorystore for Valkey. For detailed setup instructions, refer to the Memorystore for Valkey networking documentation.
To use a managed cache service:
First, disable the bundled Valkey deployment in values.yaml:
redis:
  enabled: false
Then configure the cache endpoint in values.yaml under the API section:
api:
env:
- name: MEMCACHE_ENABLE
value: "true"
- name: MEMCACHE_HOST
value: "redis://<username>:<password>@host:port" # Can be hostname or IP address
Note: If your managed cache service is configured in clustered mode, add the following environment variable:
- name: MEMCACHE_BACKEND
value: "redis-cluster"
Install the core platform components using Helm:
helm install authlete-platform . -n authlete -f secret-values.yaml
Verify the installation:
# Check pod status
kubectl get pods -n authlete
Expected output:
NAME READY STATUS RESTARTS AGE
api-6b78f87847-xxxxx 2/2 Running 0 2m
proxy-6c99bdc94b-xxxxx 1/1 Running 0 2m
redis-5f8f64df5d-xxxxx 1/1 Running 0 2m
Note: Initial deployment may take 5 minutes while images are pulled and databases are initialized.
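If a pod stays in a non-Running state for longer than that, the usual first checks are its events and container logs, for example:
# Replace <pod-name> with the pod that is not starting
kubectl describe pod <pod-name> -n authlete
kubectl logs <pod-name> -n authlete --all-containers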
The following components are optional based on your requirements: the management console and the IdP server. After enabling them in your values, apply the change with a Helm upgrade:
helm upgrade authlete-platform . -f secret-values.yaml -n authlete
Verify the optional components:
# Check new pod status
kubectl get pods -n authlete
Expected output:
NAME READY STATUS RESTARTS AGE
console-6b78f87847-xxxxx 1/1 Running 0 2m
idp-6c99bdc94b-xxxxx 2/2 Running 0 2m
The final step is to set up a load balancer service to expose your Authlete deployment:
Note: The following commands are GCP-specific. For other cloud providers (AWS, Azure, etc.), please refer to your cloud provider’s documentation for reserving a static IP address. You must reserve a regional static external IP address in GCP. This is required because GKE LoadBalancer services only support IPs allocated in the same region as the cluster.
# GCP-specific commands
# Reserve a static IP address
gcloud compute addresses create authlete-ip --region=us-central1
# Get the reserved IP address
gcloud compute addresses describe authlete-ip --region=us-central1
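If you only need the raw IP address (for example, to paste into the Service manifest below), the same describe command can print just that field:
# Print only the reserved address
gcloud compute addresses describe authlete-ip --region=us-central1 --format='value(address)'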
Create a manifest named proxy-lb-service.yaml:
apiVersion: v1
kind: Service
metadata:
labels:
app: proxy
name: proxy-lb
spec:
externalTrafficPolicy: Local
ports:
- name: https
port: 443
protocol: TCP
targetPort: 8443
selector:
app: proxy
sessionAffinity: None
type: LoadBalancer
loadBalancerIP: <external_static_ip> # Replace with your reserved static IP
kubectl apply -f proxy-lb-service.yaml -n authlete
kubectl get service proxy-lb -n authlete
You should see output similar to:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
proxy-lb LoadBalancer 10.x.x.x YOUR_STATIC_IP 443:32xxx/TCP 1m
Once the EXTERNAL-IP shows your static IP (may take a few minutes), your Authlete deployment is accessible via HTTPS on that IP address.
Create DNS records for all three domains pointing to your load balancer IP:
# API Server
api.your-domain.com. IN A YOUR_STATIC_IP
# IdP Server
login.your-domain.com. IN A YOUR_STATIC_IP
# Management Console
console.your-domain.com. IN A YOUR_STATIC_IP
Verify the DNS configuration:
# Test DNS configuration
dig +short api.your-domain.com
dig +short login.your-domain.com
dig +short console.your-domain.com
# Test HTTPS endpoints
curl -I https://api.your-domain.com/api/info
If all domains resolve to your load balancer IP and the endpoints are accessible, your Authlete deployment is ready for use.
You can now access the Management Console:
https://console.your-domain.com
Log in with the admin credentials you configured in secret-values.yaml.
Note: If you cannot access the console, verify that the DNS record resolves to your load balancer IP, the TLS certificate covers the console domain, and the console pod is running.