# Edge Parameter Store
This repository contains the Edge Parameter Store (EPS), a Django application built to hold parameter data for deployments of Google Distributed Cloud Connected clusters, where the number of clusters may scale to tens of thousands. It is meant to be deployed in support of other solutions and tools in this GitHub organization and as part of a broader suite of solutions. This tool conveys the following immediate benefits:
- Stores cluster parameters specifically matching a retail edge use case
- Supports integration with the cluster provisioner
- Supports integration with the hydrator
- Integrates with Google IAP for authentication and authorization
- Supports granular role-based access controls
- Provides a REST API
- Supports data validation workflows
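For flavor, a request to the REST API might look like the following; the endpoint path and auth flow shown here are purely illustrative assumptions, not the documented API surface:

```shell
# Hypothetical request; the actual routes and IAP auth flow will differ.
curl -H "Authorization: Bearer ${IAP_TOKEN}" \
    "https://eps.cloud.corpdomain.com/api/clusters/"
```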
Go through all of the steps below to get started:
- Preparation
- Build Image
- Deploy to GCP
## Preparation

This section outlines steps that the user must perform:
- Log in to the GCP Console and select your target project.
- Navigate to 'APIs & Services' -> 'OAuth consent screen' in the GCP Console.
- Configure the OAuth consent screen as External and set the publishing status to In Production.
- Choose a fully qualified domain name for your application. The provided Terraform takes an opinionated approach to your FQDN: it creates a managed zone in the deployment project and assumes you will create a delegated zone in your application project pointing to it; an A record for the EPS app's load balancer is created in the managed zone. In doing so, it wires the application up end to end. If this is not your desired configuration, you will need to modify the provided Terraform.
- Set your selected fully qualified domain name in `terraform.tfvars`:

  ```hcl
  app_fqdn = "eps.cloud.corpdomain.com"
  ```
- Add this name or a wildcard name to the CSRF trusted origins list, which is the `csrf_trusted_origins` variable in the `terraform.tfvars` file, e.g.

  ```hcl
  csrf_trusted_origins = ["eps.cloud.corpdomain.com"]
  ```
When using the provided Terraform, a TLS certificate is automatically generated and associated with the EPS load balancer.
Providing and associating your own externally managed TLS certificate is technically possible; however, the provided example Terraform does not support this configuration and would require modification.
## Build Image

We will build a Docker image and push it to an Artifact Registry repository in the target deployment GCP project. Please make sure the Artifact Registry repository is available and that the GCP account running the build script has sufficient permissions to push images to it.
- Clone this project and go to the project workspace:

  ```shell
  git clone https://github.com/GDC-ConsumerEdge/parameter-store.git
  cd parameter-store
  ```
- Update `terraform.tfvars` with your correct values.
- Build the Docker image and push it to Artifact Registry:

  ```shell
  ./build.sh [VERSION] [APP]
  ```
- Both `VERSION` and `APP` are optional. `VERSION` defaults to `0.1`, while `APP` defaults to `parameter-store`.
- The generated image is tagged as `${REPO_HOST}/${PROJECT_ID}/${REPO_FOLDER}/${APP}:v${VERSION}`,
  e.g. `gcr.io/test-proj-1234/parameter-store/parameter-store:v0.1`.
- The `latest` tag is always attached to the most recently built image.
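For orientation, here is a minimal sketch of what a script like `build.sh` typically does, assuming plain Docker and the tag format above. This is not the actual script; `REPO_HOST`, `PROJECT_ID`, and `REPO_FOLDER` are assumed to be set in the environment:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of a build-and-push flow; the real build.sh may differ.
set -euo pipefail

VERSION="${1:-0.1}"          # default version per the notes above
APP="${2:-parameter-store}"  # default app name per the notes above
IMAGE="${REPO_HOST}/${PROJECT_ID}/${REPO_FOLDER}/${APP}"

docker build -t "${IMAGE}:v${VERSION}" -t "${IMAGE}:latest" .
docker push "${IMAGE}:v${VERSION}"
docker push "${IMAGE}:latest"
```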
## Deploy to GCP

Use Terraform to deploy the parameter store infrastructure.
To deploy the parameter store for the first time, initialize the Terraform state.
- Go to the `examples/terraform` folder:

  ```shell
  cd examples/terraform
  ```
- If using a local state file, e.g. for local development work, simply run:

  ```shell
  terraform init
  ```
- For better collaboration, we typically need to save the Terraform state file to a shared location, e.g. a GCS
  bucket. To achieve this, configure a GCS bucket as the Terraform backend:
  - Create or use an existing GCS bucket. Make sure your current account has write permission to it.
  - Create a backend configuration file (e.g., `env/testing.gcs.tfbackend`) with content like this:

    ```
    bucket = "my-gcs-bucket-for-tfstate"
    prefix = "eps/testing"
    ```

  - Initialize Terraform with this backend configuration:

    ```shell
    terraform init -backend-config=env/testing.gcs.tfbackend
    ```
The full set of Terraform variables is defined in `variables.tf`, though we will call out notable items here.
- `eps_project_id` is where the app (and nearly all of its resources) will be created. It is assumed this project
  already exists.
- `secrets_project_id` is the ID of a Google Cloud project where app-related secrets are to be configured. This may be
  the same as `eps_project_id` or yet another separate project.
- `eps_image` is the full name and tag of the image to be deployed by Terraform to Cloud Run on its first invocation.
- `terraform_principal` is the principal associated with the EPS Terraform deployment and is very likely a Google
  service account.
- `iap_audience` is the "audience" against which IAP JWT tokens are compared. This comes from the backend service
  associated with your load balancer stack. This value is not known until Terraform is run the first time, so on its
  first invocation an empty string (`""`) is appropriate. **After deploying infrastructure, you must fill in this
  value and deploy a second time.** This value takes the form of:

  ```
  "/projects/${EPS project ID}/us-central1/backendServices/${EPS load balancer backend service number}"
  ```
  You can find this number by running the following example:

  ```shell
  gcloud compute backend-services describe eps-lb-backend-service \
      --region=us-central1 \
      --format="value(id)"
  ```

  The command prints the backend service number, e.g.:

  ```
  6252277272778218001
  ```
- `superusers` is an array of users that will automatically receive EPS "superuser" permissions upon first login. This
  may be omitted, in which case superusers will need to be configured manually, or not at all. EPS should have at
  least one superuser. Because EPS uses IAP, its identities (usernames) are email addresses configured in Google
  Identity. EPS uses the username portion of the email address as its username; simply drop the `@mycorp.com`.
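For illustration, the username derivation described above amounts to taking the local part of the email address. A minimal sketch; the function name is hypothetical and not part of EPS:

```python
# Hypothetical sketch of the IAP email-to-username mapping described above.
def eps_username(email: str) -> str:
    """Return the local part of an email, e.g. 'jdoe' for '[email protected]'."""
    return email.split("@", 1)[0]

assert eps_username("[email protected]") == "jdoe"
```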
- `eps_allowed_accessors` is an array of IAM principals that will be granted Cloud Run invoker and IAP accessor
  permissions. This does not grant in-app permissions but merely allows these identities web access through IAP and
  into Cloud Run. This is probably a group of users whose membership is managed externally to Terraform.
- `worker_pool_name` is a string used as the name of the private worker pool that Cloud Build will use to execute the
  build. This is part of the `_PRIVATE_POOL` substitution in `cb.tf`.
- `db_password_key` is a string used as the name or identifier of the secret in Secret Manager that stores the
  database password. This is passed as the substitution `_DATABASE_PASSWORD_KEY` to `cb.tf`.
- `instance_connection_name` is a string used as the connection name for the Cloud SQL instance. This is used by the
  Cloud SQL Proxy to connect to the database. Passed as `_INSTANCE_CONNECTION_NAME` in `cb.tf`.
- `artifact_registry_project_id` is a string value for the Google Cloud project ID where the Artifact Registry is
  located. Passed as `_ARTIFACT_REGISTRY_PROJECT_ID` in `cb.tf`.
- `artifact_registry_repo` is a string used as the name of the repository within Artifact Registry where images will
  be stored/pulled. Passed as `_ARTIFACT_REGISTRY_REPO` in `cb.tf`.
- `app_image_name` is a string used as the name of the application image to be built or used. Passed as
  `_APP_IMAGE_NAME` in `cb.tf`.
- `git_repo_url` is a string value for the URL of the Git repository that Cloud Build will clone. Passed as
  `_GIT_REPO_URL` in `cb.tf`.
- `git_user_email` is a string value for the email address to be configured for Git operations within the build
  environment. Passed as `_GIT_USER_EMAIL` in `cb.tf`.
- `git_user_name` is a string value for the username to be configured for Git operations within the build environment.
  Passed as `_GIT_USER_NAME` in `cb.tf`.
An example `terraform.tfvars` looks like this:

```hcl
environment_name             = "dev"
eps_project_id               = "example-eps"
secrets_project_id           = "example-eps"
eps_image                    = "us-docker.pkg.dev/example-eps/hsp/parameter_store:v15"
terraform_principal          = "serviceAccount:[email protected]"
app_fqdn                     = "example.eps.corp.net"
csrf_trusted_origins         = ["localhost"]
iap_audience                 = "/projects/22368248810/us-central1/backendServices/506473743633145264"
superusers                   = ["example"]
eps_allowed_accessors        = ["group:[email protected]"]
worker_pool_name             = "eps-private-pool"
db_password_key              = "eps-db-pass" # Or fetch from a secure source if needed at plan time
instance_connection_name     = "example-eps:us-central1:eps-015b"
artifact_registry_project_id = "example-eps"
artifact_registry_repo       = "eps"
app_image_name               = "parameter_store"
git_repo_url                 = "https://github.com/example-eps/parameter-store.git"
git_user_email               = "[email protected]"
git_user_name                = "example-eps-gituser"
```
Use `terraform plan` to check the deployment:

```shell
terraform plan
```
Use `terraform apply` to deploy the app:

```shell
terraform apply
```
The IAP audience is not known until after the first run due to unfortunate cycles in the Terraform dependency graph.
Take the output value of `jwt_audience` and set it as the variable `iap_audience`. This value is available as a
Terraform output after the first successful apply. You can retrieve it using `terraform output jwt_audience`. It
should look something like:

```
/projects/22368248810/us-central1/backendServices/6252277272778218001
```

When done, rerun `terraform apply`.
To tear down the deployment, run:

```shell
terraform destroy
```

This command will tear down all the GCP resources provisioned by Terraform. Note that resources not managed by
Terraform (e.g., GCS buckets for state if manually created and not imported) may require separate manual deletion.
This Cloud Build pipeline is designed to automate the process of building the EPS application, managing its database schema changes (migrations), and ensuring that these schema changes are version-controlled alongside your application code.
The files used are `cb.tf` and `cloudbuild.yaml`.
`cb.tf` automates the deployment of a Cloud Build CI/CD pipeline by:
- Establishing Secure GitHub Integration: It creates a connection to your GitHub repository using the Cloud Build
  GitHub App and a stored OAuth token for authenticated access.
- Granting Necessary Permissions: It assigns the Cloud Build service agent permissions to manage secrets (specifically
  for the GitHub token), ensuring it can operate correctly.
- Defining Repository and Trigger: It links your specific GitHub repository to Cloud Build and sets up a trigger that
  automatically starts a build process on pushes to the `main` branch.
- Configuring Build Execution: It specifies that builds will run using a designated private worker pool and a
  dedicated service account, with build steps and parameters defined in an external `cloudbuild.yaml` file (customized
  via substitutions).
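As a rough sketch of the trigger portion only (this is not the contents of `cb.tf`; the resource and variable names here are illustrative assumptions), such a trigger might look like:

```hcl
# Illustrative sketch; the real cb.tf also wires up the GitHub connection,
# IAM grants, and the full set of substitutions.
resource "google_cloudbuild_trigger" "eps_build" {
  name            = "eps-build"                       # assumed trigger name
  location        = "us-central1"
  filename        = "cloudbuild.yaml"                 # build steps defined externally
  service_account = google_service_account.eps_cb.id  # assumed dedicated service account

  repository_event_config {
    repository = google_cloudbuildv2_repository.eps.id # assumed repository resource
    push {
      branch = "^main$" # build on pushes to main
    }
  }

  substitutions = {
    _PRIVATE_POOL = var.worker_pool_name # private worker pool for builds
  }
}
```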
The `cloudbuild.yaml` file defines a multi-step build process including:
- Download Cloud SQL Proxy (download-proxy): Fetches the specified version of the Cloud SQL Proxy binary required for
  database connectivity.
- Make Cloud SQL Proxy Executable (chmod-proxy): Sets execution permissions for the downloaded Cloud SQL Proxy.
- Build Temporary Docker Image (build-temp-image): Creates an initial Docker image of the application tagged as
  `:temp`, to be used for running subsequent tasks like migrations.
- Run Database Migrations (run-migrations): Using the temporary image, starts the Cloud SQL Proxy, then runs Django's
  `makemigrations` to generate new database migration files for the `parameter_store` and `api` apps, and copies these
  new migration files to the shared `/workspace`.
- Check Copied Migrations (check-copied-migrations): Verifies that the database migration files generated in the
  previous step have been successfully copied to `/workspace`.
- Build Final Docker Image (build-final-image): Builds the definitive application Docker image, incorporating any new
  migration files, and tags it with the current Git commit SHA.
- Push Final Image (push-final-image-latest): Pushes the final, commit-SHA-tagged Docker image to the specified Google
  Artifact Registry repository.
- Commit Migrations (commit-migrations): Clones the application's Git repository, copies the newly generated migration
  files from `/workspace` into it, and then commits and pushes these files to a new branch in the remote Git
  repository.
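For orientation only, here is a heavily simplified sketch of the run-migrations step; the image reference, proxy invocation, and paths are assumptions, not the actual `cloudbuild.yaml`:

```yaml
# Simplified, hypothetical sketch of the run-migrations step.
- id: run-migrations
  name: 'us-docker.pkg.dev/${_ARTIFACT_REGISTRY_PROJECT_ID}/${_ARTIFACT_REGISTRY_REPO}/${_APP_IMAGE_NAME}:temp'
  entrypoint: bash
  args:
    - -c
    - |
      ./cloud-sql-proxy "${_INSTANCE_CONNECTION_NAME}" &   # private path to Cloud SQL
      sleep 5                                              # give the proxy time to start
      python manage.py makemigrations parameter_store api  # generate migration files
      cp -r parameter_store/migrations /workspace/         # share with later steps
```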
- After the initial deployment, no user configuration is needed for EPS if the Terraform `superusers` variable was set.
- All other users will initially have minimum permissions, allowing them to log into the system but not perform other
  actions.
- The superuser will assign permissions to users.
- Users must log in at least once so that the superuser can see them in the system.
- Alternatively, the superuser may prepopulate users based on user identity information. Please refer to
  `users-and-permissions.md` for more information.
EPS supports data loading while running on Cloud SQL. Terraform sets up EPS infrastructure to deploy Cloud SQL with fully private networking, making it impossible to access as an operator without resorting to creative hacks. To make this less challenging, there is an optional data loading mechanism using Cloud Build and private workers, which is temporarily provisioned in the EPS network with access to Cloud SQL via a private networking path.
The Terraform folder contains optional resources to stand up infrastructure for a Cloud Build data loader pipeline. If you do not want these resources, you should comment out or delete the resources in this folder. In addition, there are references to these resources elsewhere; search the folder for TODO lines related to the data loader and remove them.
In the `examples/data_loader` folder there are two files:

- `cloudbuild-data-loader.yaml` - a Cloud Build pipeline which runs a data loader script
- `load_db.py` - an example data loader script
The `load_db.py` file is an example one-time data loader. It provides a command-line interface to load source of truth
files (CSVs) into the application and to add some basic validators. This example file demonstrates how to load the
database with clusters and validators and should be modified for the specific use case.
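As a rough illustration of the pattern such a loader follows (a sketch only: the settings module, model fields, and CSV columns below are assumptions, not the actual `load_db.py`):

```python
# Hypothetical sketch of a one-time CSV loader; not the actual load_db.py.
import csv
import os

import django

# Assumed settings module; adjust to the project's actual settings path.
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "parameter_store.settings")
django.setup()

from parameter_store.models import Cluster, Group  # noqa: E402

def load_clusters(csv_path: str) -> None:
    """Read a source-of-truth CSV and upsert one Cluster row per record."""
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            # Column names here ('group', 'cluster_name') are assumptions.
            group, _ = Group.objects.get_or_create(name=row["group"])
            Cluster.objects.update_or_create(
                name=row["cluster_name"],
                defaults={"group": group},
            )

if __name__ == "__main__":
    load_clusters("clusters.csv")
```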
The build file is expected to be submitted by a user with the Cloud Build Submitter IAM role (or equivalent
permissions) to the Cloud Build API. To get started, copy the `load_db.py` file into an empty directory and colocate
SoT files adjacent to it:
```shell
mkdir db_loader
cp examples/data_loader/* db_loader
cp *.csv db_loader  # these are the source of truth files; make sure they end up adjacent to load_db.py
cd db_loader
```
Next, edit the `cloudbuild-data-loader.yaml` file. Be sure to update the values in that file wherever you see a
`TODO: update me` comment.
Then submit the build job:

```shell
gcloud beta builds submit . --config=cloudbuild-data-loader.yaml --region=us-central1
```
Cloud Build will then kick off and run the data loader script, but not before establishing a private connection from
its private pool to the private IP of the database. The source of truth (SoT) files will be read, prepared, and loaded
into the database. The data loader script can be modified to suit your use case and even extended so that it may be
run more than once. Note it also has a `--wipe` flag so that you can erase user-loaded data and start from a fresh
dataset if needed.
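For example, a wipe-and-reload might look like this (a hypothetical local invocation; in practice the script runs inside the Cloud Build pipeline, and only the `--wipe` flag is documented here):

```shell
python load_db.py --wipe   # erase user-loaded data and start from a fresh dataset
```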
To develop locally, first set up a local PostgreSQL database:

- Install PostgreSQL 16
- Log in to PostgreSQL as a Superuser: Use the `psql` command-line utility to log in as the `postgres` user.

  ```shell
  psql -U postgres
  ```

  If this doesn't work, try the following (common on Debian and perhaps other Linux distros):

  ```shell
  sudo -u postgres psql
  ```
- Create a New Database: Create a new database named `eps`.

  ```sql
  CREATE DATABASE eps;
  ```
- Create a New User: Create a new user named `eps` with a specified password. Replace `your_password` with a strong
  password of your choice.

  ```sql
  CREATE USER eps WITH PASSWORD 'your_password';
  ```
- Change Ownership of the Database: Alter the ownership of the `eps` database to the new user `eps`.

  ```sql
  ALTER DATABASE eps OWNER TO eps;
  ```
- Grant Necessary Privileges to the User: Grant the necessary permissions for the `eps` user to manage objects within
  the `eps` database. Please copy these one by one into the shell; they do not copy well en masse.

  ```sql
  -- Connect to the database named 'eps'
  \c eps;
  -- Grant usage on the schema 'public' to 'eps'
  GRANT USAGE ON SCHEMA public TO eps;
  -- Grant create privileges on the schema 'public' to 'eps'
  GRANT CREATE ON SCHEMA public TO eps;
  -- Grant all privileges on all tables in the schema 'public' to 'eps'
  GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA public TO eps;
  -- Grant all privileges on all sequences in the schema 'public' to 'eps'
  GRANT ALL PRIVILEGES ON ALL SEQUENCES IN SCHEMA public TO eps;
  -- Grant default privileges so future tables and sequences in 'public' are accessible to 'eps'
  ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT ALL ON TABLES TO eps;
  ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT ALL ON SEQUENCES TO eps;
  ```
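To sanity-check the setup, connect to the new database as the new user (assuming a default local install listening on port 5432):

```shell
psql -U eps -d eps -h localhost   # should connect without errors
```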
- Install Python 3.12
- Create a virtualenv:

  ```shell
  python3.12 -m venv .venv
  source .venv/bin/activate
  ```
- Install dev requirements:

  ```shell
  pip3 install -r requirements-dev.txt
  pip3 install -r requirements.txt
  ```

- Run Django migrations:

  ```shell
  python manage.py makemigrations
  python manage.py migrate
  ```

- Collect static files:

  ```shell
  python manage.py collectstatic
  ```

- Create a Superuser: Create a superuser for accessing the Django admin interface:

  ```shell
  python manage.py createsuperuser
  ```

- Start the Development Server: Run the Django development server to check if everything is working fine:

  ```shell
  export DJANGO_DEBUG=True
  python manage.py runserver
  ```
Please see the `docs` folder for more documentation.
Sometimes Django doesn't seem to pick up the models for `parameter_store`, so I have to run `makemigrations`
explicitly for it:

```shell
python3 manage.py makemigrations parameter_store
python3 manage.py migrate
```
If successful, it looks something like:

```
$ python3 manage.py makemigrations parameter_store
Migrations for 'parameter_store':
  parameter_store/migrations/0001_initial.py
    + Create model Cluster
    + Create model GlobalRole
    + Create model Group
    + Create model Tag
    + Create model ClusterFleetLabel
    + Create model ClusterIntent
    + Create model ClusterRole
    + Add field group to cluster
    + Create model GroupRole
    + Create model ClusterTag

$ python3 manage.py migrate
Operations to perform:
  Apply all migrations: admin, auth, contenttypes, parameter_store, sessions
Running migrations:
  Applying parameter_store.0001_initial... OK
```
I have Postgres running on a remote Linux device (called `cloudtop`) while I develop locally, so I port-forward to
PostgreSQL on that device:

```shell
ssh -TL 5432:localhost:5432 cloudtop
```
This project is not an official Google project. It is not supported by Google and Google specifically disclaims all warranties as to its quality, merchantability, or fitness for a particular purpose.