The purpose of this document is to provide clear and concise guidance to customers using AWS ECS who would like to deploy the self-hosted version of Authlete through the Ansible solution.
Set up the Ansible execution environment.
Install boto3 and botocore for AWS interactions. Use the following command:
pip3 install boto3 botocore
You will also need the Python requests module, which you can install with the following command:
pip3 install requests
To install Ansible, run the following. Version >= 11.5.0 is recommended:
pip3 install "ansible>=11.5.0"
Option 1: Install the community.aws collection and its amazon.aws dependency directly:
ansible-galaxy collection install --force community.aws:==9.2.0
ansible-galaxy collection install --force amazon.aws:==9.4.0
Option 2: Add the community.aws collection and the amazon.aws dependency to a requirements.yml file:
collections:
  - name: community.aws
    version: 9.2.0
  - name: amazon.aws
    version: 9.4.0
Then, install the requirements:
ansible-galaxy collection install -r requirements.yml
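You can verify that both collections were installed at the pinned versions:
ansible-galaxy collection list | grep -E 'amazon.aws|community.aws'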
Before anything, you will need to set up your AWS account. Contact your AWS administrator and have them create credentials for you.
Install AWS CLI
Follow the official AWS guide to install or update to the latest version of the AWS CLI. Skip this step if your AWS CLI environment is already up to date.
AWS Configuration
If you do not have a pre-existing ~/.aws folder, create it. Once done, create a ~/.aws/credentials file containing the following information:
[default]
aws_access_key_id = <ACCESS_KEY>
aws_secret_access_key = <SECRET_ACCESS_KEY>
You can find these keys in your AWS IAM account settings.
Then, create another file, ~/.aws/config, containing the following information:
[default]
region = <REGION>
output = json
This corresponds to the region in which you have created your ECS cluster.
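To confirm that the credentials and region are picked up correctly, you can run:
aws configure list
aws sts get-caller-identity
The Account field returned by the second command is the value you will later use for aws_account_id.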
First, log into the artifact registry using your organization identifiers:
docker login artifacts.authlete.com -u <org_id> -p <org_token>
Then, pull the Ansible playbook from the artifact registry:
crane pull artifacts.authlete.com/authlete-platform-playbook:1.0.0 ansible-playbook.tar.gz
Finally, extract the files on your system:
tar -xzf ansible-playbook.tar.gz
Create configuration files from sample files for further editing.
cd ansible-playbook
cp -a group_vars/all.yml.sample group_vars/all.yml
cp -a group_vars/aws_ecs.yml.sample group_vars/aws_ecs.yml
You can check the group_vars/aws_ecs.yml file provided by Ansible for configuration details. This file contains all the required keys with dummy placeholder values that need to be replaced with your AWS resource identifiers:
ecs_cluster_name
execution_role_arn
efs_filesystem_id
efs_access_point_id
efs_access_point_arn
aws_logs_groups
cloudmap_registry_arns
The following steps will show you how to configure the values in the group_vars/aws_ecs.yml file.
Once your AWS account is set up, create a Fargate ECS cluster with a name of your choosing.
Use the following aws cli command to display your ECS cluster information:
aws ecs describe-clusters --clusters <CLUSTER_NAME>
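If you only want to confirm the cluster name and that its status is ACTIVE, a trimmed query such as the following also works:
aws ecs describe-clusters --clusters <CLUSTER_NAME> --query 'clusters[0].[clusterName,status]' --output text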
In the group_vars/aws_ecs.yml file, you can now replace the following AWS resource identifiers with your AWS ECS values:
# AWS account identifier
aws_account_id: "123455673100"
# Region in which ECS resources will be provisioned
region: us-east-2
# ECS cluster name where services will be deployed
ecs_cluster_name: <CLUSTER_NAME>
Create ECR repositories. Navigate to the ECR dashboard and create the following repositories:
<CLUSTER_NAME>/server-db-schema
<CLUSTER_NAME>/server
<CLUSTER_NAME>/idp-db-schema
<CLUSTER_NAME>/idp
<CLUSTER_NAME>/authlete-bootstrapper
<CLUSTER_NAME>/console
<CLUSTER_NAME>/nginx
<CLUSTER_NAME>/alpine
<CLUSTER_NAME>/valkey (optional; you can disable the valkey option in the all.yml file)
NOTE: Keep in mind that the namespace in the repository name must be the same as your <CLUSTER_NAME>.
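If you prefer the CLI over the console, a sketch along these lines creates all nine repositories; CLUSTER_NAME is a placeholder for your own cluster name:
CLUSTER_NAME=<CLUSTER_NAME>
for repo in server-db-schema server idp-db-schema idp authlete-bootstrapper console nginx alpine valkey; do
  aws ecr create-repository --repository-name "${CLUSTER_NAME}/${repo}"
done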
To correctly set up the ecr_repo_url, copy/paste the URI of one of your ECR repositories and remove the repository name at the end (the final result should end with amazonaws.com).
Use the following command to list your repositories:
aws ecr describe-repositories | grep repositoryArn
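As a shortcut, the following command (assuming at least one repository already exists) extracts the registry portion of the URI directly:
aws ecr describe-repositories --query 'repositories[0].repositoryUri' --output text | cut -d/ -f1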
In the group_vars/aws_ecs.yml file, replace the value of repo. Do not change the container images and tags.
#### #### #### #### ####
# ECR Container Images
# Don't change the container images or tags, unless specified by support@authlete.com or
# unless mentioned in our deployment guide
repo: 123456789.dkr.ecr.us-east-2.amazonaws.com
From your AWS ECS cluster, go to Tasks and follow the instructions to create your task role and execution role.
When your task is launched, find and replace the following AWS resource identifiers with your AWS ECS values.
The execution_role_arn and task_role_arn values can be found under the task definitions.
The subnet_id values can be found under the tasks.
In the group_vars/aws_ecs.yml file, configure the following:
# IAM role for ECS tasks to pull images and write logs
execution_role_arn: arn:aws:iam::123455673100:role/ecsTaskExecutionRole
# IAM role for ECS tasks to access AWS services at runtime
task_role_arn: arn:aws:iam::123455673100:role/ecsTaskRole
# Security group ID attached to the ECS service for network rules
security_group_id: sg-0f8bef00
# Subnet ID (within a VPC) where ECS tasks will be launched
subnet_id: subnet-4af11000
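If you are unsure of the identifiers, lookups such as the following can help; the role name shown is the execution role created in the previous step:
aws iam get-role --role-name ecsTaskExecutionRole --query 'Role.Arn' --output text
aws ec2 describe-security-groups --query 'SecurityGroups[].{id:GroupId,name:GroupName}' --output table
aws ec2 describe-subnets --query 'Subnets[].{id:SubnetId,az:AvailabilityZone,vpc:VpcId}' --output table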
In the AWS EC2 dashboard, under Load Balancing, create the following target groups:
authlete-console-tg
authlete-idp-tg
authlete-api-tg
authlete-proxy-tg (Health check path = /health)
authlete-valkey-tg
Make sure these target groups are of type IP address and use the corresponding container port number.
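As an alternative to the console, a target group of type IP address can be created from the CLI; this sketch uses placeholder values for the VPC ID and port, so substitute the container port of the corresponding service:
aws elbv2 create-target-group --name authlete-proxy-tg --protocol HTTP --port <CONTAINER_PORT> --vpc-id <VPC_ID> --target-type ip --health-check-path /health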
Finally, in the group_vars/aws_ecs.yml file, replace the following target group ARNs with your AWS EC2 resource identifiers:
# Application Load Balancer (ALB) target groups per microservice
api_target_group_arn: arn:aws:elasticloadbalancing:us-west-1:123456789012:targetgroup/my-api-tg/00be0deb21695fcde5
idp_target_group_arn: arn:aws:elasticloadbalancing:us-west-1:123456789012:targetgroup/my-idp-tg/abcd1234ijkl5678
console_target_group_arn: arn:aws:elasticloadbalancing:us-west-1:123456789012:targetgroup/my-console-tg/abcd1234mnop5678
proxy_target_group_arn: arn:aws:elasticloadbalancing:us-west-1:123456789012:targetgroup/my-proxy-tg/abcd1234qrst5678
valkey_target_group_arn: arn:aws:elasticloadbalancing:us-west-1:123456789012:targetgroup/my-valkey-tg/abcd1234qrst5876
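To retrieve the ARNs of the target groups you just created:
aws elbv2 describe-target-groups --query 'TargetGroups[].{name:TargetGroupName,arn:TargetGroupArn}' --output table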
Create the following load balancers:
authlete-lb
idp-lb
console-lb
proxy-lb
Set the following fields for each of the load balancers:
Navigate to the newly created CloudMap namespace and create the following services:
Replace the following AWS resource identifiers with your AWS Cloud Map values.
To list your Cloud Map services, use the following:
aws servicediscovery list-services
In the group_vars/aws_ecs.yml file, configure the following:
# Service discovery (Cloud Map) registry ARNs for internal DNS-based discovery
api_cloudmap_registry_arn: arn:aws:servicediscovery:us-west-1:123456789012:service/srv-xxxxxxxxxxxxxxxx
idp_cloudmap_registry_arn: arn:aws:servicediscovery:us-west-1:123456789012:service/srv-yyyyyyyyyyyyyyyy
console_cloudmap_registry_arn: arn:aws:servicediscovery:us-west-1:123456789012:service/srv-zzzzzzzzzzzzzzzz
To correctly set this up, copy/paste your CloudMap namespace ARN, then replace namespace/... with service/{SERVICE_ID} for each registry.
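A filtered query such as the following lists the service IDs needed for that replacement:
aws servicediscovery list-services --query 'Services[].{id:Id,name:Name,arn:Arn}' --output table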
In the AWS CloudWatch dashboard, under Logs, create a new log group with a name of your choosing.
In the group_vars/aws_ecs.yml file, configure the following:
# CloudWatch Logs group name for ECS container logs
aws_logs_groups: /ecs/authlete
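The log group can also be created from the CLI, using the same name you configure above:
aws logs create-log-group --log-group-name /ecs/authlete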
Create an AWS RDS Database with the following configurations:
Before starting the installation, ensure that both of the following databases are created.
create database authlete;
create database idp;
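For example, assuming a MySQL-compatible RDS instance, both databases can be created in one step from any host that can reach the RDS endpoint (the endpoint and user shown are placeholders):
mysql -h <RDS_ENDPOINT> -u <DB_USER> -p -e "CREATE DATABASE authlete; CREATE DATABASE idp;"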
In the group_vars/aws_ecs.yml file, configure the following EFS values:
# Amazon EFS (Elastic File System) configuration for persistent shared storage
efs_filesystem_id: fs-02e6123720a9550b5
efs_access_point_id: fsap-02e6123720a9550b5
efs_access_point_arn: arn:aws:elasticfilesystem:us-west-1:123456789012:file-system/fs-02e6123720a9550b5
efs_filesystem_id_valkey: fs-0ba6a21018f65eaas
efs_access_point_id_valkey: fsap-007fd448ce3basass
efs_access_point_arn_valkey: arn:aws:elasticfilesystem:us-west-1:123456789012:file-system/fs-0ba6a21018f65eaas
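If you need to look up the EFS identifiers, the following commands list the file systems and access points along with their ARNs:
aws efs describe-file-systems --query 'FileSystems[].{id:FileSystemId,name:Name}' --output table
aws efs describe-access-points --query 'AccessPoints[].{id:AccessPointId,arn:AccessPointArn,fs:FileSystemId}' --output table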
#### #### #### #### ####
The Ansible playbook automates the transfer of Authlete-provided container images to a customer’s own container registry. This avoids direct runtime dependency on Authlete’s registry, and ensures reproducible deployments.
The image names and their supported tags are configured in the group_vars/aws_ecs.yml file.

| Image | Description | Supported Version Tags |
|---|---|---|
| server | Core API server that handles OAuth 2.0 and OpenID Connect operations | 3.0.11 |
| server-db-schema | Database schema initialization tool for the API server | v3.0.11 |
| idp | Identity Provider server for user authentication and management | 1.0.5 |
| idp-db-schema | Database schema initialization tool for the IDP server | v1.0.5 |
| console | React-based management console for platform configuration and monitoring | v1.0.5 |
| nginx | Nginx-based reverse proxy for handling TLS termination and routing | 1.26.3 |
| valkey | Caching service for improved performance and reduced database load | 8.0.1 |
| alpine | A minimal Docker image based on Alpine Linux, designed for security, simplicity, and resource efficiency. Commonly used as a base image for lightweight containers. | 3.18 |
| authlete-bootstrapper | Initialization service for the platform. Only used during first deployment. | 1.0.0 |
images:
  api:
    name: server
    tag: "3.0.11"
  server_db_schema:
    name: server-db-schema
    tag: "v3.0.11"
  idp:
    name: idp
    tag: "1.0.5"
  idp_db_schema:
    name: idp-db-schema
    tag: "v1.0.5"
  console:
    name: console
    tag: "1.0.5"
  bootstrapper:
    name: authlete-bootstrapper
    tag: "1.0.0"
  proxy:
    name: nginx
    tag: "1.26.3"
  valkey:
    name: valkey
    tag: "8.0.1"
  proxy_sidecar:
    name: alpine
    tag: "3.18"
#### #### #### #### ####
Configure Authlete URLs:
Make sure you follow the required format to avoid errors:
In the group_vars/aws_ecs.yml file, configure the following:
authlete_api_url
authlete_idp_base_url
authlete_idp_console_url
#### #### #### #### ####
#### URLs to be used (without http:// or https://)
authlete_api_url: "<api_url>" # eg: authlete-api.example.com
authlete_idp_base_url: "<idp_url>" # eg: authlete-idp.example.com
authlete_idp_console_url: "<console_url>" # eg: authlete-console.example.com
url_scheme: "https" # either http or https
#### #### #### #### ####
Configure your admin_user_email and admin_user_password.
In the group_vars/aws_ecs.yml file, configure the following:
# Creates the necessary configurations for Authlete to start
admin_user_email: "<admin_email>" # eg: admin@example.com, Admin email used by the bootstrapper to create or log into the Authlete system
admin_user_password: <admin_password> # Corresponding password for the admin user (used during initial setup only)
boot_task_def: bootstrapper-task # ECS task definition name for the bootstrapper container/task
Log into the Authlete console and find the Authlete Organization Id and Organization Token for the following step.
Configure your Authlete Organization Id and Organization Token in the group_vars/all.yml file:
authlete_org_id: "<org_id>"
authlete_org_token: "<org_token>"
Now that you have created and configured all the required AWS resources, update the values in the Ansible group_vars so that your installation correctly points to them.
Run the environment validation playbook to verify that all required AWS infrastructure is in place. However, you may choose to skip this step as it will be automatically executed during the installation process.
Run the following ansible playbook command:
ansible-playbook playbooks/site.yml --tags prereq --skip-tags=install,uninstall,upgrade,rollingupgrade
If the output displays failed=0, this step is complete.
Install:
ansible-playbook playbooks/site.yml --tags aws,install --skip-tags upgrade,uninstall,rollingupgrade
Upgrade:
ansible-playbook playbooks/site.yml --tags aws,upgrade --skip-tags install,uninstall,rollingupgrade
Uninstall:
ansible-playbook playbooks/site.yml --tags aws,uninstall --skip-tags install,upgrade,rollingupgrade
Install the API:
ansible-playbook playbooks/site.yml --tags api,install_api --skip-tags uninstall,upgrade,rollingupgrade
Uninstall the API:
ansible-playbook playbooks/site.yml --tags api,uninstall_api --skip-tags install,upgrade,rollingupgrade
Rolling upgrade of the API:
ansible-playbook playbooks/site.yml --tags api,rollingupgrade --skip-tags install,upgrade,uninstall
Install the IDP:
ansible-playbook playbooks/site.yml --tags idp,install_idp --skip-tags uninstall,rollingupgrade
Uninstall the IDP:
ansible-playbook playbooks/site.yml --tags idp,uninstall_idp --skip-tags upgrade,rollingupgrade
Rolling upgrade of the IDP:
ansible-playbook playbooks/site.yml --tags idp,rollingupgrade --skip-tags upgrade,uninstall
Install the console:
ansible-playbook playbooks/site.yml --tags console,install_console --skip-tags uninstall,rollingupgrade
Uninstall the console:
ansible-playbook playbooks/site.yml --tags console,uninstall_console --skip-tags upgrade,rollingupgrade
Rolling upgrade of the console:
ansible-playbook playbooks/site.yml --tags console,rollingupgrade --skip-tags upgrade,uninstall