Imperative Environment Provisioning (Scripted)

⚠️ NOTE: This approach is the "Legacy" method using shell scripts and direct gcloud commands. For new environments (Staging, Prod), please refer to Declarative Provisioning (Terraform), which is the recommended approach for consistency and GitOps-style management.

Environment Management Overview

This playbook provides instructions for managing the full-stack application environments.

The tech stack consists of the following resources:

  1. A GCP project container
  2. A Cloud Run service to host the gRPC server
  3. A Cloud Run service to host the WebSocket chat server (Issue #270 - real-time streaming)
  4. A Cloud Run service to host the ESPv2 Envoy Proxy for gRPC-Web, REST API transcoding, and Google Cloud Endpoints features.
  5. A GCS bucket serving as a stateful database.
  6. A Firebase project associated with the GCP project
  7. A Firebase hosting site (one for Material 2 and one for Material 3 + Angular UIs)
  8. A Firebase Web App configuration with Authentication.

Step by Step Process

Step-1: Create a new GCP Project

TODO: Make the projects contained within a GCP folder as part of the codetricks.org organization.

Create the GCP Project

The projects have a naming convention of construction-code-expert-${ENV}

ENV=demo

GCP_PROJECT_ID=construction-code-expert-${ENV}
gcloud projects create ${GCP_PROJECT_ID} \
--name="${GCP_PROJECT_ID}" \
--set-as-default

Note: The billing account association is performed in the "Enable Required APIs" section below.

Create the Service Account

The Cloud Run service will run on behalf of a service account that will be granted permission to:

  1. Check Access Control List group membership in @codetricks.org Google Workspace domain, for application access allowlist.
  2. Read/write permission to the GCS Buckets in the given GCP project.
  3. Access Firestore for task progress tracking and real-time updates.
  4. Invoke Cloud Run Jobs for long-running PDF processing and code analysis tasks.
  5. Access Vertex AI services for Gemini Pro model inference, embeddings, and vector search.
  6. Manage Firebase Authentication for user authentication and RBAC.
  7. Create service account tokens for authentication workflows.
  8. Access BigQuery for analytics and data processing operations.

# Specify Service Account Identifiers
SERVICE_ACCOUNT_ID=cce-app-service
SERVICE_ACCOUNT_DISPLAY_NAME="Construction Code Expert Application Service"

# Create the Service Account
gcloud iam service-accounts create "${SERVICE_ACCOUNT_ID}" \
--project="${GCP_PROJECT_ID}" \
--display-name="${SERVICE_ACCOUNT_DISPLAY_NAME}"

# Download Service Account credentials
SERVICE_ACCOUNT_EMAIL=${SERVICE_ACCOUNT_ID}@${GCP_PROJECT_ID}.iam.gserviceaccount.com
SECRETS_FOLDER_PATH=.secrets/credentials
OUTPUT_KEY_FILE_PATH=.secrets/credentials/${GCP_PROJECT_ID}.${SERVICE_ACCOUNT_ID}.json
mkdir -p "${SECRETS_FOLDER_PATH}"

# Create Service Account Key
gcloud iam service-accounts keys create "${OUTPUT_KEY_FILE_PATH}" \
--iam-account="${SERVICE_ACCOUNT_EMAIL}" \
--project="${GCP_PROJECT_ID}"
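Before using the key, it is worth sanity-checking that the downloaded credentials match the service account you intended. This is a pure-bash sketch using the identifiers from this playbook (the jq comparison only runs if the key file is present; jq itself is an assumed dependency):

```shell
# Derive the expected service-account email from the playbook's identifiers
ENV=demo
GCP_PROJECT_ID=construction-code-expert-${ENV}
SERVICE_ACCOUNT_ID=cce-app-service
SERVICE_ACCOUNT_EMAIL=${SERVICE_ACCOUNT_ID}@${GCP_PROJECT_ID}.iam.gserviceaccount.com
KEY_FILE=.secrets/credentials/${GCP_PROJECT_ID}.${SERVICE_ACCOUNT_ID}.json

echo "Expecting: ${SERVICE_ACCOUNT_EMAIL}"
if [[ -f "${KEY_FILE}" ]]; then
  jq -r .client_email "${KEY_FILE}"   # should print the same email address
fi
```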

Enable Required APIs

# Link project to billing account
BILLING_ACCOUNT_ID=018A1F-2219A5-D47906 # CodeProof.app billing account
gcloud billing projects link ${GCP_PROJECT_ID} \
--billing-account=${BILLING_ACCOUNT_ID}

# Enable required APIs
# Admin SDK API - for checking Google Workspace group membership for access control
gcloud services enable admin.googleapis.com --project=${GCP_PROJECT_ID}
# Firestore API - for task progress tracking and real-time UI updates
gcloud services enable firestore.googleapis.com --project=${GCP_PROJECT_ID}
# Vertex AI API - for Gemini Pro model inference, embeddings, and vector search
gcloud services enable aiplatform.googleapis.com --project=${GCP_PROJECT_ID}

# Cloud Run deployment APIs - required for gRPC service deployment
# Cloud Run API - for deploying and managing Cloud Run services and jobs
gcloud services enable run.googleapis.com --project=${GCP_PROJECT_ID}
# Cloud Build API - for building container images from source code
gcloud services enable cloudbuild.googleapis.com --project=${GCP_PROJECT_ID}
# Artifact Registry API - for storing and managing container images
gcloud services enable artifactregistry.googleapis.com --project=${GCP_PROJECT_ID}

# ESPv2 deployment APIs - required for API gateway and proxy deployment
# Service Control API - for ESPv2 proxy to report API usage and metrics
gcloud services enable servicecontrol.googleapis.com --project=${GCP_PROJECT_ID}
# Cloud Endpoints API - for managing API configurations and service descriptors
gcloud services enable endpoints.googleapis.com --project=${GCP_PROJECT_ID}
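The eight `gcloud services enable` calls above can be collapsed into one loop. As a sketch, this prints the commands instead of running them, so you can audit the list first and then pipe the output into bash:

```shell
# Same API list as above; printing the commands makes the loop easy to review.
GCP_PROJECT_ID=construction-code-expert-demo  # or derive from ENV as above
REQUIRED_APIS=(
  admin.googleapis.com             # Workspace group membership checks
  firestore.googleapis.com         # task progress tracking
  aiplatform.googleapis.com        # Vertex AI / Gemini
  run.googleapis.com               # Cloud Run services and jobs
  cloudbuild.googleapis.com        # container builds from source
  artifactregistry.googleapis.com  # container image storage
  servicecontrol.googleapis.com    # ESPv2 usage reporting
  endpoints.googleapis.com         # Cloud Endpoints management
)
for api in "${REQUIRED_APIS[@]}"; do
  echo gcloud services enable "${api}" --project="${GCP_PROJECT_ID}"
done
# After reviewing the printed commands, re-run with `... | bash` to execute.
```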

Create Shared Artifact Registry Repository

Note: This is a one-time setup shared across all environments.

Our new deployment workflow uses a centralized, shared Artifact Registry to store Docker images. This repository should be created in a dedicated common project (construction-code-expert-repo).

  1. Create the shared GCP project:

    gcloud projects create construction-code-expert-repo --name="CCE Common Artifacts"
  2. Enable the Artifact Registry API in the shared project:

    gcloud services enable artifactregistry.googleapis.com --project=construction-code-expert-repo
  3. Create the shared Docker repository:

    gcloud artifacts repositories create custom-docker-image-repo \
    --repository-format=docker \
    --location=us-central1 \
    --description="Shared Docker repository for the Construction Code Expert application" \
    --project=construction-code-expert-repo

Grant Permission to Service Account

# Grant IAM permissions to the service account
# The service account needs the following permissions:

# 1. GCS Bucket Access - for reading/writing architectural plans and generated content
gcloud projects add-iam-policy-binding ${GCP_PROJECT_ID} \
--member=serviceAccount:${SERVICE_ACCOUNT_EMAIL} \
--role="roles/storage.objectAdmin" \
--condition=None

# 2. Cloud Run Jobs Invocation - for triggering long-running PDF processing and code analysis jobs
gcloud projects add-iam-policy-binding ${GCP_PROJECT_ID} \
--member=serviceAccount:${SERVICE_ACCOUNT_EMAIL} \
--role="roles/run.invoker" \
--condition=None

# 3. Cloud Run Developer - for broader Cloud Run service management capabilities
gcloud projects add-iam-policy-binding ${GCP_PROJECT_ID} \
--member=serviceAccount:${SERVICE_ACCOUNT_EMAIL} \
--role="roles/run.developer" \
--condition=None

# 4. Firestore Access - for task progress tracking and real-time UI updates
gcloud projects add-iam-policy-binding ${GCP_PROJECT_ID} \
--member=serviceAccount:${SERVICE_ACCOUNT_EMAIL} \
--role="roles/datastore.user" \
--condition=None

# 5. Vertex AI Access - for Gemini Pro model inference, embeddings, and vector search
gcloud projects add-iam-policy-binding ${GCP_PROJECT_ID} \
--member=serviceAccount:${SERVICE_ACCOUNT_EMAIL} \
--role="roles/aiplatform.user" \
--condition=None

# 6. Firebase Authentication Management - for user authentication and RBAC
gcloud projects add-iam-policy-binding ${GCP_PROJECT_ID} \
--member=serviceAccount:${SERVICE_ACCOUNT_EMAIL} \
--role="roles/firebase.admin" \
--condition=None

gcloud projects add-iam-policy-binding ${GCP_PROJECT_ID} \
--member=serviceAccount:${SERVICE_ACCOUNT_EMAIL} \
--role="roles/firebase.viewer" \
--condition=None

# 7. Service Account Token Creator - for token generation and impersonation capabilities during testing.
gcloud projects add-iam-policy-binding ${GCP_PROJECT_ID} \
--member=serviceAccount:${SERVICE_ACCOUNT_EMAIL} \
--role="roles/iam.serviceAccountTokenCreator" \
--condition=None

# 8. BigQuery Data Editor - for LLM log traces, analytics and data processing (future)
gcloud projects add-iam-policy-binding ${GCP_PROJECT_ID} \
--member=serviceAccount:${SERVICE_ACCOUNT_EMAIL} \
--role="roles/bigquery.dataEditor" \
--condition=None

# 9. Allow your user account to impersonate the service account (for deployment)
# NOTE: This is only needed if your user account doesn't have Project Owner or Editor roles.
# Users with Owner/Editor roles already have the necessary permissions to impersonate service accounts.
# Replace YOUR_EMAIL with your actual Google account email
gcloud iam service-accounts add-iam-policy-binding ${SERVICE_ACCOUNT_EMAIL} \
--member="user:YOUR_EMAIL" \
--role="roles/iam.serviceAccountUser" \
--project=${GCP_PROJECT_ID}

# 10. Grant Service Control API permission for ESPv2 proxy
# This is required for ESPv2 proxy to report API usage metrics and handle service management
# We grant this to our custom service account (which ESPv2 will use)
gcloud projects add-iam-policy-binding ${GCP_PROJECT_ID} \
--member="serviceAccount:${SERVICE_ACCOUNT_EMAIL}" \
--role="roles/servicemanagement.serviceController" \
--condition=None

# 11. Grant Cloud Run Service Agent permission to pull from Shared Artifact Registry
# The Google-managed Cloud Run service agent for this project needs permission to pull
# container images from the shared `construction-code-expert-repo` project.

# First, get the project number of the current environment project
PROJECT_NUMBER=$(gcloud projects describe ${GCP_PROJECT_ID} --format="value(projectNumber)")

# Construct the service agent's email address
CLOUD_RUN_SERVICE_AGENT="service-${PROJECT_NUMBER}@serverless-robot-prod.iam.gserviceaccount.com"

# Grant the 'Artifact Registry Reader' role to the service agent on the shared repo
gcloud artifacts repositories add-iam-policy-binding custom-docker-image-repo \
--location=us-central1 \
--project=construction-code-expert-repo \
--member="serviceAccount:${CLOUD_RUN_SERVICE_AGENT}" \
--role="roles/artifactregistry.reader"
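The ten project-level bindings above are the same command repeated with a different role each time, so they can also be expressed as a single loop. As with the API enablement, this sketch prints the commands for review rather than applying them directly:

```shell
# One binding per role; identical to grants 1-8 and 10 above, minus the
# per-grant comments. Pipe the reviewed output into bash to apply.
GCP_PROJECT_ID=construction-code-expert-demo
SERVICE_ACCOUNT_EMAIL=cce-app-service@${GCP_PROJECT_ID}.iam.gserviceaccount.com
ROLES=(
  roles/storage.objectAdmin
  roles/run.invoker
  roles/run.developer
  roles/datastore.user
  roles/aiplatform.user
  roles/firebase.admin
  roles/firebase.viewer
  roles/iam.serviceAccountTokenCreator
  roles/bigquery.dataEditor
  roles/servicemanagement.serviceController
)
for role in "${ROLES[@]}"; do
  echo gcloud projects add-iam-policy-binding "${GCP_PROJECT_ID}" \
    --member="serviceAccount:${SERVICE_ACCOUNT_EMAIL}" \
    --role="${role}" --condition=None
done
```

Note that grants 9 and 11 are intentionally excluded: they bind on the service account and the shared repository respectively, not on the project.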

Deprecated Service Account Configuration

⚠️ DEPRECATED: The following service account configuration is outdated and should not be used for new environments. See GitHub Issue #188 for migration details.

For existing environments: You can temporarily override the service account name in your env/${ENV}/setvars.sh file during migration:

# OLD SERVICE ACCOUNT IDENTIFIERS (DO NOT USE FOR NEW ENVIRONMENTS)
# export SERVICE_ACCOUNT_ID="google-groups-member-checker"
# export SERVICE_ACCOUNT_DISPLAY_NAME="Google Groups Member Checker"

Step-2: Create environment folder

mkdir -p env/${ENV}

and create an env/${ENV}/setvars.sh file with the following contents:

# Define the environment name to be used as suffix for key variables
ENV=demo

# Service Account Configuration (optional overrides)
# For new environments, use the new service account name:
export SERVICE_ACCOUNT_ID="cce-app-service"
export SERVICE_ACCOUNT_DISPLAY_NAME="Construction Code Expert Application Service"

# For existing environments during migration, you may temporarily use:
# export SERVICE_ACCOUNT_ID="google-groups-member-checker"
# export SERVICE_ACCOUNT_DISPLAY_NAME="Google Groups Member Checker"

# Get the directory where this script is located
SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
source "${SCRIPT_DIR}/../common/setvars.sh"

Also create:

  1. env/${ENV}/gcp/cloud-run/grpc/setvars.sh
  2. env/${ENV}/gcp/cloud-run/grpc/vars.yaml

mkdir -p env/${ENV}/gcp/cloud-run/grpc
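The folder and file creation in this step can be scripted in one pass. This sketch only scaffolds the directories and empty placeholder files; the actual setvars.sh and vars.yaml contents are environment-specific and must still be filled in as described above:

```shell
# Scaffold the environment folder layout for a new environment
ENV=demo
mkdir -p env/${ENV}/gcp/cloud-run/grpc

# Placeholders only; populate per the templates in this playbook
touch env/${ENV}/setvars.sh
touch env/${ENV}/gcp/cloud-run/grpc/setvars.sh
touch env/${ENV}/gcp/cloud-run/grpc/vars.yaml

ls env/${ENV}/gcp/cloud-run/grpc
```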

Step-3: Deploy GRPC services to Cloud Run

# Build from source
# Skip Unit Tests as they make a sizable number of LLM inference calls
mvn clean package -DskipTests

ENV="demo" # use stg or prod for staging or production, respectively
SERVICE_NAME="construction-code-expert-${ENV}"

# Load environment variables
source env/${ENV}/gcp/cloud-run/grpc/setvars.sh

# Note that ./Dockerfile uses the pre-built .jar file from the ./target directory.
# Note the increased memory (4Gi vs the 512Mi default) is required to accommodate processing large PDF files.
# TODO: Consider splitting GRPC services across different Cloud Run hosts with right-sized memory requirements.
# https://cloud.google.com/sdk/gcloud/reference/run/deploy
gcloud run deploy "${SERVICE_NAME}" \
--region=${GCP_LOCATION} \
--project=${GCP_PROJECT_ID} \
--env-vars-file=env/${ENV}/gcp/cloud-run/grpc/vars.yaml \
--allow-unauthenticated \
--service-account=cce-app-service@${GCP_PROJECT_ID}.iam.gserviceaccount.com \
--memory=4Gi \
--cpu=8 \
--source .

Retrieve the Service URL

gcloud run services describe "${SERVICE_NAME}" \
--region=${GCP_LOCATION} \
--project=${GCP_PROJECT_ID} \
--format='value(status.url)'

Step-3.5: Deploy Cloud Run Jobs for Long-Running Tasks

Cloud Run Jobs are used for processing tasks that exceed the 15-minute timeout limit of Cloud Run Services, such as PDF ingestion and code applicability analysis.

Build Project with Custom JARs

# Build from source with all custom JARs for Cloud Run Jobs
# Skip Unit Tests as they make a sizable number of LLM inference calls
mvn clean package -DskipTests

This creates specialized JAR files:

  • construction-code-expert-plan-ingestion-job-*-jar-with-dependencies.jar - For PDF processing
  • construction-code-expert-code-applicability-job-*-jar-with-dependencies.jar - For code analysis

Deploy All Cloud Run Jobs

ENV="demo" # use stg or prod for staging or production, respectively

# Load environment variables
source env/${ENV}/gcp/cloud-run/grpc/setvars.sh

# Deploy all Cloud Run Jobs (default behavior)
cli/sdlc/cloud-run-job/deploy.sh ${ENV}

Deploy Specific Cloud Run Jobs

# Deploy only plan ingestion job (PDF processing)
cli/sdlc/cloud-run-job/deploy.sh plan-ingestion ${ENV}

# Deploy only code applicability job (V2 analysis)
cli/sdlc/cloud-run-job/deploy.sh code-applicability ${ENV}

Cloud Run Jobs Configuration

| Job Type | Purpose | Memory | CPU | Timeout | Features |
|---|---|---|---|---|---|
| plan-ingestion | PDF processing & page extraction | 4Gi | 8 cores | 60 min | Tesseract OCR, PDF handling |
| code-applicability | V2 code analysis with LLM | 4Gi | 2 cores | 30 min | Context caching, batch processing |

Verify Deployment

# List all Cloud Run Jobs
gcloud run jobs list --region=${GCP_LOCATION} --project=${GCP_PROJECT_ID}

# Describe specific job
gcloud run jobs describe construction-code-expert-${ENV}-plan-ingestion \
--region=${GCP_LOCATION} --project=${GCP_PROJECT_ID}

gcloud run jobs describe construction-code-expert-${ENV}-code-applicability \
--region=${GCP_LOCATION} --project=${GCP_PROJECT_ID}

Test Cloud Run Jobs

# Test plan ingestion job
gcloud run jobs execute construction-code-expert-${ENV}-plan-ingestion \
--region=${GCP_LOCATION} --project=${GCP_PROJECT_ID} \
--args="test-task-id,R2024.0091-2024-10-14,21_Wiggins_DCA_Progress_Set_10142024-1.pdf,6"

# Test code applicability job
gcloud run jobs execute construction-code-expert-${ENV}-code-applicability \
--region=${GCP_LOCATION} --project=${GCP_PROJECT_ID} \
--args="test-task-id,R2024.0091-2024-10-14,6,2217,5"
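After triggering the test executions, you can check their status. This sketch prints the status-check commands for both jobs (the region value is an assumption; use the GCP_LOCATION from your setvars.sh), so the loop itself can be reviewed before piping it into bash:

```shell
# Print the execution-listing command for each Cloud Run Job
ENV=demo
GCP_PROJECT_ID=construction-code-expert-${ENV}
GCP_LOCATION=us-central1  # assumed; match your environment's setvars.sh
for job in plan-ingestion code-applicability; do
  echo gcloud run jobs executions list \
    --job="construction-code-expert-${ENV}-${job}" \
    --region="${GCP_LOCATION}" --project="${GCP_PROJECT_ID}"
done
```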

Step-4: Deploy ESPv2 Envoy Proxy to Cloud Run

Reserve ESPv2 Service Hostname

Provision additional settings in the environment folder: env/${ENV}/gcp/cloud-run/endpoints/setvars.sh

Leave the CLOUD_RUN_ESP2_HOSTNAME value blank for now; we will populate it after we reserve the hostname:

# Backend Cloud Run ESPv2 (Envoy Proxy) Hostname (requires advance reservation)
export CLOUD_RUN_ESP2_HOSTNAME=""

# Load the Environment Variables
source env/${ENV}/gcp/cloud-run/endpoints/setvars.sh

# Deploy a dummy service to reserve the hostname
# TODO: Rename the var to CLOUD_RUN_ESP2_SERVICE_NAME to disambiguate.
gcloud run deploy ${CLOUD_RUN_SERVICE_NAME} \
--region=${GCP_LOCATION} \
--image="gcr.io/cloudrun/hello" \
--allow-unauthenticated \
--platform managed \
--project=${GCP_PROJECT_ID}

Retrieve the Service URL

# First get the full URL of the Cloud Run ESPv2 Service Host
CLOUD_RUN_ESPV2_URL=$(gcloud run services describe "${CLOUD_RUN_SERVICE_NAME}" \
--region=${GCP_LOCATION} \
--project=${GCP_PROJECT_ID} \
--format='value(status.url)')

# Strip off the https:// prefix
CLOUD_RUN_ESPV2_HOSTNAME=${CLOUD_RUN_ESPV2_URL#https://}

Use the URL to determine the hostname (minus the https:// prefix) and populate it back into env/${ENV}/gcp/cloud-run/endpoints/setvars.sh.
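Writing the hostname back into setvars.sh can also be done with sed. This is a self-contained sketch using an example URL and a scratch file (GNU sed -i syntax; BSD/macOS sed needs `sed -i ''`), so point SETVARS at the real env/${ENV}/gcp/cloud-run/endpoints/setvars.sh when applying it:

```shell
# Scratch setvars.sh with the blank line from the earlier step (illustration only)
SETVARS=$(mktemp)
echo 'export CLOUD_RUN_ESP2_HOSTNAME=""' > "${SETVARS}"

# Example URL; in practice this comes from `gcloud run services describe`
CLOUD_RUN_ESPV2_URL="https://cce-endpoints-demo-abc123-uc.a.run.app"
CLOUD_RUN_ESPV2_HOSTNAME=${CLOUD_RUN_ESPV2_URL#https://}

# Replace the blank export with the reserved hostname
sed -i "s|^export CLOUD_RUN_ESP2_HOSTNAME=.*|export CLOUD_RUN_ESP2_HOSTNAME=\"${CLOUD_RUN_ESPV2_HOSTNAME}\"|" "${SETVARS}"
cat "${SETVARS}"
```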

Deploy the actual ESPv2 Cloud Run Service

Create env/${ENV}/gcp/cloud-run/endpoints/api_config.yaml and populate the ESPv2 hostname and the backend gRPC Cloud Run hostname from the previous steps, respectively.

cd env
./deploy-endpoints.sh ${ENV}

Step-5: Configure the GCS Bucket

# Create gs://construction-code-expert-demo bucket in construction-code-expert-demo project
source env/${ENV}/gcp/cloud-run/grpc/setvars.sh

# Create the new bucket
gcloud storage buckets create gs://${GCP_GCS_BUCKET_NAME} \
--project=${GCP_PROJECT_ID} \
--location=${GCP_LOCATION} \
--uniform-bucket-level-access

# Configure CORS policy for direct file uploads from frontend
gcloud storage buckets update gs://${GCP_GCS_BUCKET_NAME} \
--cors-file=env/${ENV}/gcp/gcs/cors-config.json

# Populate the bucket with seed data
gcloud storage cp --recursive gs://construction-code-expert-dev/* \
gs://construction-code-expert-demo

This process may take a long time (hours for large buckets). For example, a copy from dev as a template into demo reported:

Completed files 35338/35338 | 1.2GiB/1.2GiB | 4.4MiB/s

Note: The CORS configuration allows the frontend to upload files directly to Cloud Storage using signed URLs, which bypasses Cloud Run's 32MB request size limit for large file uploads.
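The cors-config.json referenced above is environment-specific; this heredoc sketch shows a plausible shape (the origins, methods, and headers here are assumptions; set the origins to your actual Firebase hosting domains and local dev server):

```shell
# Hypothetical CORS policy for signed-URL uploads; adjust values to your env
ENV=demo
mkdir -p env/${ENV}/gcp/gcs
cat > env/${ENV}/gcp/gcs/cors-config.json <<'EOF'
[
  {
    "origin": [
      "https://construction-code-expert-demo-m3.web.app",
      "http://localhost:4200"
    ],
    "method": ["GET", "PUT", "POST"],
    "responseHeader": ["Content-Type", "x-goog-resumable"],
    "maxAgeSeconds": 3600
  }
]
EOF
```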

Step-5.5: Setup Google Maps API (Optional - for Address Features)

Purpose: Configure Google Maps JavaScript API and Places API for intelligent address autocomplete and map visualization in the Project Settings UI.

When to run: Required only if deploying the Google Maps integration feature (Issue #227 Phase 1.5).

# Automated setup using provisioning script
cli/sdlc/new-environment-provisioning/setup-google-maps-api.sh ${ENV}

What it configures:

  • Enables Google Maps JavaScript API and Places API
  • Creates restricted API key with domain and API restrictions
  • Stores API key in Secret Manager
  • Grants service account access to secret

Manual steps after script:

  1. Update Cloud Run service to mount secret:

    gcloud run services update construction-code-expert-${ENV} \
    --region=${GCP_LOCATION} \
    --project=${GCP_PROJECT_ID} \
    --update-secrets=GOOGLE_MAPS_API_KEY=google-maps-api-key:latest
  2. Update frontend environment configuration with API key

  3. Set API quotas in GCP Console (recommended: 10k req/day)

  4. Enable billing alerts (recommended: $50/month)
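After step 1, you can confirm the secret is actually mounted on the service. This sketch prints the inspection command (the region is an assumption; use your GCP_LOCATION) rather than running it:

```shell
# Print a command that dumps the service's env entries; after running it,
# GOOGLE_MAPS_API_KEY should appear sourced from secret google-maps-api-key
ENV=demo
echo gcloud run services describe "construction-code-expert-${ENV}" \
  --region=us-central1 --project="construction-code-expert-${ENV}" \
  --format="yaml(spec.template.spec.containers[0].env)"
```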

Skip this step if: You're not using the address autocomplete and map features.

Step-5.6: Deploy Firestore Indexes and Rules

Purpose: Deploy Firestore database indexes and security rules to ensure optimal query performance and proper access control.

When to run: Required for all environments to enable proper database functionality.

# Deploy both Firestore indexes and rules
./cli/sdlc/firestore/deploy.sh ${ENV}

# Deploy only indexes (if rules are already deployed)
./cli/sdlc/firestore/deploy.sh ${ENV} --indexes-only

# Deploy only rules (if indexes are already deployed)
./cli/sdlc/firestore/deploy.sh ${ENV} --rules-only

What gets deployed:

  • Firestore Indexes: Composite indexes from web-ng-m3/firestore.indexes.json for efficient querying
  • Firestore Rules: Security rules from web-ng-m3/firestore.rules for access control

Prerequisites:

  • Firebase project created and configured
  • Firebase CLI authenticated
  • Environment configuration files in env/${ENV}/

Integration with other deployments:

  • Automatically included in full-stack deployment
  • Automatically included in frontend deployment
  • Can be run independently for index-only updates

Step-6: Deploy the UI to Firebase

Create Hosting Targets

The Firebase hosting configuration is already set up in web-ng-m3/firebase.json with targets for dev, demo, prod, and test environments.

Note: If setting up a new environment, the firebase hosting:sites:create and firebase target:apply commands below will automatically add the following configuration block to web-ng-m3/firebase.json:

{
  "target": "${ENV}",
  "public": "dist-${ENV}",
  "ignore": [
    "firebase.json",
    "**/.*",
    "**/node_modules/**"
  ],
  "rewrites": [
    {
      "source": "**",
      "destination": "/index.html"
    }
  ]
}

For example, when setting up the test environment, this block will be automatically added:

{
  "target": "test",
  "public": "dist-test",
  "ignore": [
    "firebase.json",
    "**/.*",
    "**/node_modules/**"
  ],
  "rewrites": [
    {
      "source": "**",
      "destination": "/index.html"
    }
  ]
}

No manual editing of the JSON file is required; the Firebase CLI handles this automatically.

# Create the Firebase Project for the existing GCP project
# Note: This must be done through Firebase Console as gcloud doesn't support this operation
# Go to: https://console.firebase.google.com/

# Create the Firebase Site for Material Design M3
firebase_site_name=${GCP_PROJECT_ID}-m3
firebase hosting:sites:create --project ${GCP_PROJECT_ID} ${firebase_site_name}

# Add Hosting Site to a Firebase Target
# https://firebase.google.com/docs/cli/targets
cd web-ng-m3
firebase target:apply --project ${GCP_PROJECT_ID} hosting ${ENV} ${firebase_site_name}

Create Firebase Web App

# Use the app name same as hosting site name.
firebase apps:create --project ${GCP_PROJECT_ID} web ${firebase_site_name}

# Remove color codes https://superuser.com/questions/380772/removing-ansi-color-codes-from-text-stream
FIREBASE_APP_ID=$(firebase apps:list --project=${GCP_PROJECT_ID} \
| grep "${firebase_site_name}" \
| awk -F '│' '{print $3}' \
| sed -e 's/\x1b\[[0-9;]*m//g' \
| grep -oE '[[:alnum:]:]+')

# Generate Firebase configuration for M3 (not M2)
firebase apps:sdkconfig WEB ${FIREBASE_APP_ID} > ../env/${ENV}/firebase/m3/firebaseConfig.json
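The ANSI-stripping pipeline above is fragile if the table layout changes. An alternative sketch parses `firebase apps:list --json` with jq instead; note the payload shape shown here (a `result` array with `appId`/`displayName` fields) is an assumption about firebase-tools' JSON output, so verify it against your CLI version:

```shell
# Sample payload for illustration; in practice capture it with:
#   APPS_JSON=$(firebase apps:list --project=${GCP_PROJECT_ID} --json)
APPS_JSON='{"status":"success","result":[{"appId":"1:123456:web:abcdef","displayName":"construction-code-expert-demo-m3","platform":"WEB"}]}'

firebase_site_name=construction-code-expert-demo-m3
FIREBASE_APP_ID=$(echo "${APPS_JSON}" \
  | jq -r --arg name "${firebase_site_name}" \
      '.result[] | select(.displayName==$name) | .appId')
echo "${FIREBASE_APP_ID}"
```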

Update the authDomain in ../env/${ENV}/firebase/m3/firebaseConfig.json:

"authDomain": "${firebase_site_name}.web.app"
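The authDomain update can be done with jq rather than hand-editing. This self-contained sketch patches a scratch copy with example values; point CONFIG at the real ../env/${ENV}/firebase/m3/firebaseConfig.json when applying it:

```shell
# Scratch config for illustration; the apiKey/authDomain values are examples
firebase_site_name=construction-code-expert-demo-m3
CONFIG=$(mktemp)
echo '{"apiKey":"example-key","authDomain":"construction-code-expert-demo.firebaseapp.com"}' > "${CONFIG}"

# Rewrite authDomain to the hosting site's web.app domain
jq --arg d "${firebase_site_name}.web.app" '.authDomain = $d' "${CONFIG}" \
  > "${CONFIG}.tmp" && mv "${CONFIG}.tmp" "${CONFIG}"
cat "${CONFIG}"
```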

Build and Deploy Angular Material M3 Application

# Navigate to the M3 web application folder
cd web-ng-m3

# Load environment variables
source ../env/${ENV}/setvars.sh
source ../env/${ENV}/firebase/m3/setvars.sh

# Install dependencies (if not already done)
npm install

# Clone googleapis if not already present
if [[ ! -d "../env/dependencies/googleapis" ]]; then
git clone https://github.com/googleapis/googleapis ../env/dependencies/googleapis
fi

# Generate gRPC sources
protoc -I=../src/main/proto \
-I=../env/dependencies/googleapis \
--js_out=import_style=commonjs,binary:src/generated.commonjs \
--grpc-web_out=import_style=typescript,mode=grpcwebtext:src/generated.commonjs \
../src/main/proto/*.proto \
../env/dependencies/googleapis/google/type/date.proto \
../env/dependencies/googleapis/google/api/*.proto

# Build for the specific environment
npm run build:${ENV}

# Deploy to Firebase hosting
firebase deploy --project=${GCP_PROJECT_ID} --only hosting:${ENV}

# Deploy Firestore indexes and rules
./cli/sdlc/firestore/deploy.sh ${ENV}

Configure Authentication

Note: The following steps must be performed manually in the Firebase and Google Cloud consoles. This is a one-time setup for each new environment and cannot be automated with the Firebase or gcloud CLIs due to security and compliance requirements.

For reference see also: https://github.com/sanchos101/construction-code-expert/issues/112

This process enables Google Sign-In for your application and configures the OAuth consent screen that users see when they first sign in.

Enable Google Sign-In: https://console.firebase.google.com/project/construction-code-expert-${ENV}/authentication/providers

Enter the following:

Add Authorized domains at: https://console.firebase.google.com/project/construction-code-expert-${ENV}/authentication/settings

Add Authorized domains at: https://console.cloud.google.com/auth/branding?project=construction-code-expert-${ENV}

Add "Authorized JavaScript Origins" and "Authorized redirect URLs" to the OAuth client created by Firebase: https://console.cloud.google.com/auth/clients?project=construction-code-expert-${ENV}

Grant Group Checker Service Account access

As the admin@codetricks.org Google Workspace administrator, grant the service account cce-app-service@construction-code-expert-${ENV}.iam.gserviceaccount.com permission to read group members in the codetricks.org Google Workspace domain.

Configure Domain-Wide Delegation

Go to https://admin.google.com/ac/owl/domainwidedelegation and allow the service account to use an API scope of https://www.googleapis.com/auth/admin.directory.group.member.readonly

Admin Role Assignment to Service Account

Go to https://admin.google.com/ac/roles/41442010053214214/admins and assign Admin Role > Groups Reader role to the service account (by client ID number).

For more information see https://github.com/sanchos101/construction-code-expert/issues/110