
ElectricSQL on Amazon ECS

Terraform configuration for provisioning an ECS cluster to run ElectricSQL behind an Application Load Balancer, connected to an instance of RDS for PostgreSQL.

Warning

This Terraform configuration is a work in progress. You should review it carefully before using it in a production setting.

Please let us know if you notice any bugs, missing configuration or poorly chosen defaults. See the "Contributing" and "Support" sections at the bottom.

Overview

Running terraform apply for this configuration without any modifications will provision the following infrastructure:

  • a new VPC with two private and two public subnets
  • an instance of RDS for PostgreSQL that has logical replication enabled
  • Electric sync service running the electricsql/electric:latest image from Docker Hub as a Fargate task on ECS
  • an Application Load Balancer with an HTTP and an HTTPS listener, both routing to the default port of the Electric sync service container

Things you can customize with input variables:

  • the name of each logical component
  • database credentials
  • Electric's Docker image tag, etc.
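For example, a minimal terraform.tfvars could look roughly like the following. Only profile and tls_certificate_arn are referenced elsewhere in this README; the commented-out names are purely illustrative, so check them against the top-level variables.tf before using them.

profile             = "my-aws-profile"
tls_certificate_arn = "arn:aws:acm:eu-west-1:123456789012:certificate/example"

# Hypothetical examples of other values you may be able to customize
# (check variables.tf for the real names):
# db_password        = "change-me"
# electric_image_tag = "latest"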

Note

When building this infrastructure from scratch for the first time, you will need to perform some manual steps, including initializing the remote state for Terraform and requesting a TLS certificate from AWS Certificate Manager. See the next section for a complete walkthrough.
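The exact backend used for the remote state isn't covered here, but initializing remote state generally means adding a backend block before running terraform init. Here is a hedged sketch using an S3 backend with placeholder names; the bucket and DynamoDB lock table must already exist.

# Sketch only: bucket, key, region and table names are placeholders,
# not values defined by this repo.
terraform {
  backend "s3" {
    bucket         = "my-terraform-state-bucket"
    key            = "electric-sql/terraform.tfstate"
    region         = "eu-west-1"
    dynamodb_table = "my-terraform-locks"
    encrypt        = true
  }
}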

Usage

Initial setup

To set up a new infra from scratch, follow these steps:

  1. Configure the AWS CLI profile with your access key ID, secret access key and default region.
aws configure --profile '<profile-name>'
  2. Initialize the provider and local modules.
terraform init
  3. Copy the terraform.tfvars.example file and edit the variable values in it to match your preferences. Use the same <profile-name> you specified above for the profile variable in your terraform.tfvars file.
cp terraform.tfvars.example terraform.tfvars
  4. Request a TLS certificate from AWS Certificate Manager, e.g. via the AWS console (https://console.aws.amazon.com/acm/home). You will need to provide a domain name, such as my-electric-sync-service.example.com. Keep a note of this, as you'll create a CNAME record for it below once you know the load balancer's hostname. (This is different from the validation CNAME you add in the next step.)

  5. Verify your ownership of the domain by adding a validation CNAME record with your DNS provider, so that AWS can validate the certificate request and issue the certificate. You can find the "CNAME name" and "CNAME value" to use in the "Domains" section of the certificate page once you've created the request. (If you don't see the information in the table, scroll right.)

  6. Use the ARN of the newly issued certificate as the value for the top-level tls_certificate_arn variable in your terraform.tfvars file.

  7. Provision the infrastructure.

terraform apply
  8. Once the load balancer is up and running, create another CNAME record on your domain, using the domain you chose for your certificate as the name and the load balancer's generated domain name as the value (see the output sketch after these steps for one way to look that name up). Here's how it might look in Namecheap's advanced DNS management view:

[Screenshot: CNAME record in Namecheap's advanced DNS management view]

  9. Try sending an HTTP request to your custom domain to verify that it's working:
$ curl -i https://sync.aws-testing.example.com/v1/health
HTTP/2 200
date: Thu, 14 Nov 2024 11:28:57 GMT
content-type: application/json
content-length: 19
vary: accept-encoding
cache-control: no-cache, no-store, must-revalidate
x-request-id: GAfSPDjAhfDWy3QAAAXy
server: ElectricSQL/0.8.1
access-control-allow-origin: *
access-control-expose-headers: *
access-control-allow-methods: GET, HEAD

{"status":"active"}
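A convenient way to look up the load balancer's generated domain name for step 8 is to expose it as a Terraform output. The attribute path below is an assumption about the load_balancer module's outputs, so adjust it to match the module's actual interface.

# Hypothetical output; the load_balancer module may expose the DNS name
# under a different output name.
output "load_balancer_dns_name" {
  value = module.load_balancer.dns_name
}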

Updating the sync service

To upgrade the running Electric sync service to a new version of the Docker image, stop the current task and wait for the ECS scheduler to start a new task automatically. Every time a new Fargate task is started, it pulls the latest Docker image matching the configured tag from Docker Hub. The image tag to use is specified in module.ecs_task_definition in main.tf.

# Make sure you have the correct profile selected in your terminal
export AWS_DEFAULT_PROFILE='<profile-name>'

# Replace cluster_name below with your actual ECS cluster name
cluster_name=electric_ecs_cluster
aws ecs stop-task --cluster $cluster_name --task $(
  aws ecs list-tasks --cluster $cluster_name --query 'taskArns[0]' --output text
)

To update the sync service's configuration, edit module.ecs_task_definition.container_environment in main.tf and rerun the provisioning command:

terraform apply
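As a rough illustration of what that edit could look like, here is a hedged sketch of the module call in main.tf. The argument names and module source path are assumptions, not this repo's actual interface; DATABASE_URL is the only setting named elsewhere in this README.

# Sketch only: check the real argument names in the module's variables.tf.
module "ecs_task_definition" {
  source = "./modules/ecs_task_definition"

  # Docker image tag pulled from Docker Hub whenever a new task starts.
  image = "electricsql/electric:latest"

  # Environment variables passed to the Electric container.
  container_environment = [
    { name = "DATABASE_URL", value = "postgresql://postgres:password@db-host:5432/postgres" },
    { name = "ELECTRIC_LOG_LEVEL", value = "info" }, # hypothetical example variable
  ]
}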

Components

The configuration in this repo is split into a set of local modules which serve as more-or-less self-contained logical units, making it easier to read through and modify. They are not meant to be used as standalone building blocks for other projects.

Included modules:

  • vpc - custom VPC for the RDS instance, the ECS task, and the Application Load Balancer
  • rds - instance of RDS for Postgres with logical replication enabled
  • ecs_task_definition - Fargate task for the Electric sync service based on the Docker Hub image
  • ecs_service - custom ECS cluster with one Fargate service that uses the task definition from above
  • load_balancer - Application Load Balancer for SSL termination and routing traffic to the sync service's HTTP port
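These modules compose in main.tf in the usual Terraform way, with each one consuming the previous one's outputs. The snippet below is only a schematic of that composition; the real argument and output names live in the modules' own variables.tf and outputs.tf files.

# Schematic only: argument and output names are illustrative.
module "vpc" {
  source = "./modules/vpc"
}

module "rds" {
  source     = "./modules/rds"
  subnet_ids = module.vpc.private_subnet_ids
}

module "load_balancer" {
  source     = "./modules/load_balancer"
  subnet_ids = module.vpc.public_subnet_ids
}

module "ecs_service" {
  source           = "./modules/ecs_service"
  target_group_arn = module.load_balancer.target_group_arn
}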

Input variables

Each local module defines a set of variables that allow its internal configuration to be modified. Not all of those variables are exposed in the top-level variables.tf file, since they have default values suitable for provisioning a test infra. If you want to set custom values for some of those module variables, set them directly in the module blocks in main.tf, or add new variable definitions to the top-level variables.tf and use them in the module arguments in main.tf.
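For example, to lift one of those module variables to the top level, you could add a definition like the following to variables.tf and pass it through in the corresponding module block in main.tf. The rds_instance_class name and the module argument are illustrative, not variables this repo necessarily defines.

# Hypothetical top-level variable in variables.tf:
variable "rds_instance_class" {
  description = "Instance class for the RDS for PostgreSQL instance"
  type        = string
  default     = "db.t4g.micro"
}

# ...then pass it through in main.tf (argument name is illustrative):
#   module "rds" {
#     ...
#     instance_class = var.rds_instance_class
#   }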

Use cases

Connecting to existing RDS instance

TODO...

If you already have an externally-managed RDS instance that has logical replication enabled, you can add its connection URI as an input variable to this configuration and reference it in the DATABASE_URL config passed to the ecs_task_definition's container_environment.
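One hedged way to wire that up is with a new top-level variable; the variable name and the exact container_environment entry shape below are assumptions rather than this repo's actual interface.

# Hypothetical top-level variable for an externally managed database.
variable "database_url" {
  description = "Connection URI of a Postgres database with logical replication enabled"
  type        = string
  sensitive   = true
}

# ...referenced from the DATABASE_URL entry in the ecs_task_definition
# module's container_environment in main.tf:
#   { name = "DATABASE_URL", value = var.database_url }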

You will need to import the existing VPC in that case and adjust the CIDR blocks, instead of creating a brand new VPC to be managed by the Terraform configuration.

terraform import vpc ... subnets ...

Connecting to external database

If the database you want Electric to connect to is running in a VPC you don't have control over or even outside of AWS, you can pass its full connection URI to the DATABASE_URL config. The database must either be accessible over the public Internet, or you need to create the necessary network link between your VPC and the external database.

Importing existing VPC

TODO...

Importing existing TLS certificates

TODO...

Contributing

See the Community Guidelines including the Guide to Contributing and Contributor License Agreement.

Support

We'd be happy to learn about your experience using this Terraform configuration and adapting it to your needs! We have an open community Discord. Come and say hello and let us know if you have any questions or need any help getting things running.
