Engineering Playbook
This playbook helps contributing software engineers navigate the codebase.
Related Playbooks
- PRD/TDD Feature Development Workflow: Comprehensive workflow for planning complex features with Product Requirements Documents (PRDs) and Technical Design Documents (TDDs), including automation scripts and GitHub issue creation
- Protocol Buffers and gRPC Best Practices: Design-driven development with proto definitions, enum annotations, JSON serialization, and API testing
- Local gRPC Server: Running and testing gRPC services locally
- gRPC-Gateway: HTTP/JSON transcoding setup for REST API access
- Software Engineering Principles: Core engineering values and practices
Feature Development Lifecycle
This section outlines the standard workflow for implementing features, from branching to cleanup, designed for both human engineers and AI agents.
0. Authentication & Configuration
The development environment is pre-configured with git and gh CLI tools.
Authentication:
- Git Operations: Authenticated via SSH (`~/.ssh/id_ed25519`). Ensure permissions are set correctly (`chmod 600`) if setting up manually.
- GitHub CLI: Authenticated via `$GH_TOKEN`. Agents should verify access with `gh auth status`.
Remote Tracking: To ensure proper tracking (and to fix "Publish Branch" issues in IDEs), set your remote origin to the SSH URL:
git remote set-url origin git@github.com:sanchos101/construction-code-expert.git
1. Branching Strategy
We use feature branches for all changes. The naming convention is:
feat/<issue-id>-<short-description>
Example:
# For Issue #313: Add Logo to Top App Bar
git checkout -b feat/313-add-logo-to-top-app-bar
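For agents scripting branch creation, the naming convention can be derived mechanically. A minimal sketch using only standard shell tools (the issue id and title mirror the example above):

```shell
# Derive a feat/<issue-id>-<short-description> branch name from an issue.
issue_id=313
title="Add Logo to Top App Bar"

# Lowercase the title, replace runs of non-alphanumerics with '-', trim trailing '-'.
slug=$(printf '%s' "$title" | tr '[:upper:]' '[:lower:]' | tr -cs 'a-z0-9' '-')
slug=${slug%-}

echo "feat/${issue_id}-${slug}"   # → feat/313-add-logo-to-top-app-bar
```

The result can be passed straight to `git checkout -b`.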
2. Implementation & Commits
Use Conventional Commits to keep history clean and readable.
Format: <type>: <description> (Issue #<id>)
Types:
- feat: New feature
- fix: Bug fix
- docs: Documentation only
- style: Formatting, missing semi-colons, etc.
- refactor: Code change that neither fixes a bug nor adds a feature
- test: Adding missing tests
Example:
# Commit your changes with a conventional message
git commit -m "feat: add logo to top app bar (Issue #313)"
# Push your feature branch to the remote repository
git push origin feat/313-add-logo-to-top-app-bar
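The format above can be checked mechanically. A sketch of such a check (illustrative only; this repo does not necessarily enforce it as a hook):

```shell
# Validate a commit message against <type>: <description> (Issue #<id>).
msg="feat: add logo to top app bar (Issue #313)"

if printf '%s' "$msg" | grep -Eq '^(feat|fix|docs|style|refactor|test): .+ \(Issue #[0-9]+\)$'; then
  echo valid     # → valid
else
  echo invalid
fi
```

Dropped into a `commit-msg` hook, the same regex would reject non-conforming messages before they land in history.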
3. GitHub Workflow
Use the gh CLI to streamline your workflow.
Create a Pull Request:
gh pr create --title "feat: add logo to top app bar" --body "Fixes #313"
Update Issue Status: You can comment on issues directly from the CLI:
gh issue comment 313 --body "Implemented logo changes. Verifying in test environment."
4. Cleanup (Best Practices)
We follow a rigorous cleanup process to keep the repository and local environment healthy.
Step 1: Automate Remote Deletion (One-time Setup)
- Go to GitHub Repository > Settings > General > Pull Requests.
- Check "Automatically delete head branches".
- Why: This keeps the remote repository clean automatically after merges.
Step 2: Clean Up Local References (The "Prune")
After a branch is merged and deleted on GitHub, you must tell your local git that it's gone.
# Fetch latest and prune deleted remote branches
git fetch --prune
# Short: git fetch -p
Step 3: Bulk Delete Local Merged Branches
Don't delete branches one by one. Use this safety-checked command to delete all branches that have been merged into main:
git branch --merged | grep -v "^\*" | grep -v "main" | xargs git branch -d
Breakdown:
- `git branch --merged`: Lists branches merged into the current HEAD.
- `grep -v "^\*"`: Excludes the branch you currently have checked out (marked with `*`).
- `grep -v "main"`: Protects `main` from being deleted.
- `xargs git branch -d`: Deletes the remaining local branches safely.
Note: The lowercase `-d` ensures only fully merged branches are deleted. Remote branches are deleted automatically by GitHub (see Step 1).
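To see exactly what the filters pass through, here is the same pipeline run over sample `git branch --merged` output, with the final delete replaced by a plain print so it is safe to run anywhere (the branch names are made up):

```shell
# Simulate `git branch --merged` output: current branch is main,
# plus two merged feature branches.
printf '%s\n' '* main' '  feat/313-add-logo' '  feat/200-old-work' \
  | grep -v "^\*" \
  | grep -v "main" \
  | sed 's/^ *//'
# → feat/313-add-logo
# → feat/200-old-work
```

Only the feature branches survive the filters; in the real command, `xargs git branch -d` would then delete exactly those.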
Manual Remote Deletion (If Step 1 is disabled): If you need to manually delete a remote branch:
git push origin --delete <branch-name>
Step 4: Summary Routine (Daily/Weekly)
git checkout main
git pull
git fetch -p
# Run the bulk delete command above
Agents: Should execute these cleanup steps as part of their session teardown or after successful PR merges.
Planning New Features
Before implementing a new feature, use the PRD/TDD Workflow for comprehensive planning and documentation.
Quick Decision Guide
Use PRD/TDD Workflow if your feature:
- Affects multiple components (frontend + backend)
- Requires new APIs or external integrations
- Has cost implications (cloud services, APIs)
- Needs security review (API keys, permissions)
- Will be implemented in multiple phases
Skip for:
- Simple bug fixes
- Minor UI tweaks
- Documentation updates
- Single-component changes
The Five-Phase Process
- Initial Proposal: Start with a brief description and gather context
- Create PRD: Document WHAT to build and WHY (product-focused, no verbose code)
- Create TDD: Document HOW to build it (complete implementation code)
- Create Automation: Build scripts for deployment (reduce manual steps)
- Create GitHub Issue: Track implementation with comprehensive checklist
Compounding Effect
Each new feature builds on prior work:
Issue #227: Project Metadata Management
├── Created: ProjectAddress message
├── Implemented: Address editing
└── Delivered: Project settings UI
↓ (builds on)
Issue #236: Google Maps Integration
├── Extends: ProjectAddress with lat/lng
├── Enhances: Address entry with autocomplete
└── Adds: Map visualization
↓ (future)
Issue #XXX: 3D Flyover View
├── Uses: Existing geocoded addresses
├── Enhances: Visual confirmation
└── Adds: Terrain and context visualization
Benefits:
- Each phase delivers working features
- Documentation network grows richer
- Patterns emerge and get reused
- Context is preserved across iterations
Real Example: Issue #236 (Google Maps Integration)
- Planning time: ~1 hour
- Documentation: 3,200+ lines across 6 files
- Automation: 90% of deployment automated
- Dependencies: Built on Issue #227
- Result: Implementation-ready with complete specs
See PRD/TDD Workflow Playbook for complete details and examples.
Workstation setup
- Download the source code from the GitHub repository
git clone https://github.com/sanchos101/construction-code-expert
cd construction-code-expert
- Compile the code
# Compile only (faster, skips packaging and tests)
mvn clean compile
# Or build the complete package, skipping tests
# Note: Skip unit tests as they make a sizable number of LLM inference calls
mvn clean package -DskipTests
If you just want to regenerate the Java classes for the Protobuf files, run:
mvn clean protobuf:compile protobuf:compile-custom
📘 See Also: Protocol Buffers and gRPC Best Practices for comprehensive guidance on proto-first design, enum annotations, JSON serialization, and API testing.
Note on Build Process: The project uses a thin JAR + separate dependencies approach for optimized Cloud Run deployments (see GitHub Issue #191). The default build creates a thin JAR with dependencies copied to target/dependency/. It is highly recommended to use the -DskipTests flag to avoid running LLM-intensive unit tests during local builds.
- `mvn clean package -DskipTests`: Builds the thin JAR and copies dependencies to `target/dependency/` (default, used for the gRPC service)
- `mvn clean package -DskipTests -P cloud-run-jobs`: Additionally builds 3 separate job JARs for Cloud Run Jobs (plan-ingestion, code-applicability, compliance-report)
Maven Build Profiles: Maven profiles are a way to customize the build process for different scenarios or environments. They allow you to conditionally activate or deactivate build plugins, dependencies, and configurations. In this project, the grpc-service profile is active by default (no -P flag needed), while the cloud-run-jobs profile must be explicitly activated with -P cloud-run-jobs to generate the additional job JARs. Profiles are defined in the <profiles> section of pom.xml and can include different plugin configurations, dependencies, or build steps that only execute when that profile is active.
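As an illustration only (not the project's actual `pom.xml` contents), a default-active profile alongside an opt-in one is declared along these lines:

```xml
<profiles>
  <!-- Active when no -P flag is given: the thin-JAR gRPC service build -->
  <profile>
    <id>grpc-service</id>
    <activation>
      <activeByDefault>true</activeByDefault>
    </activation>
    <!-- plugin executions for the default build go here -->
  </profile>
  <!-- Activated explicitly with -P cloud-run-jobs: adds the three job JARs -->
  <profile>
    <id>cloud-run-jobs</id>
    <!-- additional packaging executions for plan-ingestion,
         code-applicability, and compliance-report go here -->
  </profile>
</profiles>
```

Note one Maven subtlety: a profile marked `activeByDefault` is deactivated whenever any other profile is activated on the command line, so projects that need both typically re-list the default profile's executions inside the opt-in profile.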
Deployment: The project now uses Docker images pushed to Google Artifact Registry rather than uploading source code, significantly improving deployment speed.
Code Style and Formatting
The project uses Spotless (auto-formatting) and Checkstyle (style validation) to maintain Google Java Style Guide compliance.
Quick Commands
# Auto-format files you're working on
export JAVA_HOME=/usr/lib/jvm/temurin-23-jdk-arm64
mvn spotless:apply -DspotlessFiles=src/main/java/path/to/YourFile.java
# Check style violations
mvn checkstyle:check -Dcheckstyle.includes="**/YourFile.java"
# Format entire codebase (use cautiously)
mvn spotless:apply
# Check all violations
mvn checkstyle:check
Recommended Workflow
When working on code, format the files you're modifying before committing:
# Format your changes
git diff --name-only | grep '\.java$' | while read file; do
  mvn spotless:apply -DspotlessFiles="$file"
done
# Verify no major violations
mvn checkstyle:check
📖 For complete usage, troubleshooting, and advanced workflows, see Checkstyle & Spotless Guide
Working with GCP Environments
The project utilizes several Google Cloud Platform (GCP) environments for development, testing, and production. The currently provisioned environments are dev, test, and demo, with stg and prod planned for the future. You can find detailed configurations for each environment in the env/${ENV} directories.
Firebase Configuration
Firebase Web API Keys (for token generation):
- Test: `env/test/firebase/m3/setvars.secrets.sh`
- Dev: `env/dev/firebase/m3/setvars.secrets.sh`
- Demo: `env/demo/firebase/m3/setvars.secrets.sh`
These keys are used by the Firebase token generator (firebase-token-generator/generate-token.sh) for integration testing.
Local Development Setup
When working on your local workstation, it is standard practice to connect to the dev GCP environment to access backend services like Vertex AI. The following commands will configure your local gcloud CLI to point to the dev project.
# Initialize Google Cloud CLI and authenticate
gcloud components update
gcloud auth application-default login
# Set the project ID for local development
gcloud config set project construction-code-expert-dev
gcloud auth application-default set-quota-project construction-code-expert-dev
# Enable the Vertex AI API on the project
gcloud services enable aiplatform.googleapis.com
Once your project is configured, you can initialize your shell session with the dev environment's variables:
source env/dev/setvars.sh
If you are using your own GCP project for local development, you can create an `env/local/setvars.sh` file with your project's specific settings and add it to `.gitignore`.
AI Software Engineering Agent Environment
A dedicated test GCP environment has been provisioned specifically for the AI SWE Agent. The agent operates on behalf of the ai-swe-agent@construction-code-expert-test.iam.gserviceaccount.com service account, which has been granted the necessary permissions to build and deploy services within this project. The agent is encouraged to perform all integration testing within this test environment to ensure that deployments and new features are validated in a clean, consistent setting that closely mirrors production.
The agent is predominantly operated through the Cursor IDE, which runs in a defined development container. The configuration files for this environment are located in the .devcontainer/ directory. Within this container, the service account is authenticated using Application Default Credentials (ADC), allowing seamless interaction with GCP services.
For complete AI agent provisioning instructions, see:
- AI Agent Setup Guide - Comprehensive IAM permissions and Secret Manager access
- GitHub Issue #187 - Original provisioning issue with step-by-step commands
Quick Summary of Permissions:
- Storage, Firestore, Vertex AI, Firebase (application access)
- Cloud Run, Cloud Build, Artifact Registry (deployment)
- Service Management, Service Usage (API management)
- Secret Manager (API key retrieval - Issue #236+)
- IAM service account impersonation (testing)
- Logging (build and deployment logs)
Generating Bearer Tokens for Frontend Testing
To perform authenticated actions in frontend tests, the agent needs to act on behalf of a real user. For this purpose, the ai-swe-agent-test@codetricks.org Google Workspace user has been created.
The agent can generate a Firebase ID (Bearer) token for this test user by running the following command from the workspace root:
# Generate token for test environment
./firebase-token-generator/generate-token.sh --env test ai-swe-agent-test@codetricks.org
# Generate token for dev environment
./firebase-token-generator/generate-token.sh --env dev sanchos101@gmail.com
# Generate token for demo environment
./firebase-token-generator/generate-token.sh --env demo sanchos101@gmail.com
How It Works:
The token generator automatically detects its environment and uses the appropriate authentication method:
- Devcontainer: Uses ADC with service account credentials (via `GOOGLE_APPLICATION_CREDENTIALS`)
- Host Machine: Uses service account key files for maximum reliability
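The detection boils down to checking whether ADC credentials are present. A sketch of the idea (illustrative, not the generator script's actual code):

```shell
# Pick an auth method: prefer ADC when GOOGLE_APPLICATION_CREDENTIALS is set,
# otherwise fall back to a per-project service account key file.
auth_method() {
  if [ -n "${GOOGLE_APPLICATION_CREDENTIALS:-}" ]; then
    echo "adc"
  else
    echo "service-account-key-file"
  fi
}

GOOGLE_APPLICATION_CREDENTIALS="/tmp/sa.json"
auth_method   # → adc

unset GOOGLE_APPLICATION_CREDENTIALS
auth_method   # → service-account-key-file
```

In the devcontainer the variable is pre-set, so the ADC path is taken automatically; on a host machine the key-file path applies.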
Environment-Specific Setup:
Devcontainer (No setup required):
- Already configured with `GOOGLE_APPLICATION_CREDENTIALS` pointing to the test environment service account
- Works out-of-the-box for the test environment
Host Machine Setup:
For host machines, you need service account key files for each GCP project you work with:
# Required service account key files (download from GCP Console -> IAM & Admin -> Service Accounts)
.secrets/credentials/construction-code-expert-test-firebase-adminsdk-serviceAccountKey.json
.secrets/credentials/construction-code-expert-dev-firebase-adminsdk-serviceAccountKey.json
.secrets/credentials/construction-code-expert-demo-firebase-adminsdk-serviceAccountKey.json
Benefits of This Approach:
✅ Environment Detection: Automatically uses the right authentication method
✅ Multi-Project Support: Easy switching between dev/test/demo environments
✅ Reliable Authentication: Service account keys work consistently across all environments
✅ Auto-Configuration: Automatically sources the correct API keys and environment variables
✅ Clear Interface: --env flag makes it simple to target any environment
Automated Frontend Authentication for Cypress Tests
🎯 Key Achievement: We've successfully implemented programmatic authentication for Cypress E2E tests, enabling full-stack UI testing without manual login.
How It Works:
- Application Detects Cypress: The Angular app checks for `window.Cypress` and skips automatic redirects during tests
- Token Generation: Cypress automatically invokes the existing `./firebase-token-generator/generate-token.sh` script for each test
- Programmatic Login: Cypress uses the generated ID tokens to establish authenticated sessions
- Preserved Auth State: Tests navigate using clicks rather than page reloads to maintain authentication
Critical Implementation Notes:
- Test-Aware Auth Service: Added a `cypressLogin()` method to `firebase-auth.service.ts` that accepts ID tokens and sets user state directly
- Navigation Strategy: Use `cy.contains().click()` instead of `cy.visit()` after authentication to preserve session state
- Timing: Always add `cy.wait(3000)` after login to allow authentication state to propagate through the application
Example Test Pattern:
it('should test authenticated functionality', () => {
  cy.visit('/');
  cy.get('body').should('not.contain.text', 'Initializing Application', { timeout: 30000 });
  cy.loginByFirebase('ai-swe-agent-test@codetricks.org', 'construction-code-expert-test');
  cy.wait(3000);
  cy.contains('ProjectName').click(); // Preserves auth state
  cy.contains('Protected Content').should('be.visible');
});
Deployment Requirements: The frontend application must be deployed to the test environment with the authentication modifications before running Cypress tests:
cd web-ng-m3 && ./deploy.sh test
Running Cypress End-to-End Tests
The Cypress test suite now supports fully automated authentication without manual token management.
Automated Authentication Workflow:
- Dynamic Token Generation: Tests automatically generate fresh tokens using the `cy.loginByFirebase()` command, which internally calls the same `./firebase-token-generator/generate-token.sh` script used for manual token generation
- Programmatic Login: The application's test-aware authentication establishes user sessions programmatically using the generated ID tokens
- Preserved Sessions: Navigation within tests maintains authentication state
# Navigate to the web UI directory
cd web-ng-m3
# Run authenticated Cypress tests against the deployed test environment
npx cypress run --spec "cypress/e2e/**/*.cy.ts" --browser chromium --config baseUrl=https://construction-code-expert-test-m3.web.app
Test Requirements:
- The frontend application must be deployed to the `test` environment with authentication modifications
- The `ai-swe-agent-test@codetricks.org` user must exist in Firebase with appropriate permissions
- Tests should use `cy.loginByFirebase()` for authentication and avoid `cy.visit()` after login
This workflow enables the AI SWE Agent to perform comprehensive end-to-end testing of authenticated features, including project management, file uploads, compliance analysis, and user interface interactions, all in a fully automated, secure environment.
Plan Review CLI Commands
The project includes a comprehensive CLI for architectural plan review operations. Here are basic examples:
Create a New Project
Create a new architectural plan project container:
# Create a project backed by Google Cloud Storage
cli/codeproof.sh architectural-plan create \
--project US.CA.SanJose-1550-Tech.Dr-rev2002-07-09 \
--name "1550 Tech Drive, San Jose" \
--description "Commercial building renovation project" \
--filesystem GCS
# Or create the same project on the local filesystem
cli/codeproof.sh architectural-plan create \
--project US.CA.SanJose-1550-Tech.Dr-rev2002-07-09 \
--name "1550 Tech Drive, San Jose" \
--description "Commercial building renovation project" \
--filesystem LOCAL
Upload Files to Project
Before ingesting pages, you must first upload PDF files to the project's inputs folder.
GCS Filesystem (Cloud Storage):
# Upload a single file
cli/codeproof.sh architectural-plan upload \
--project US.CA.SanJose-1550-Tech.Dr-rev2002-07-09 \
--filesystem GCS \
"Building-Plans.pdf"
# Upload multiple files to the same project
cli/codeproof.sh architectural-plan upload \
--project US.CA.SanJose-1550-Tech.Dr-rev2002-07-09 \
--filesystem GCS \
"Architectural-Plans.pdf" \
"Structural-Plans.pdf" \
"Electrical-Plans.pdf"
LOCAL Filesystem:
# Upload from local file paths
cli/codeproof.sh architectural-plan upload \
--project US.CA.SanJose-1550-Tech.Dr-rev2002-07-09 \
--filesystem LOCAL \
"/home/user/plans/Architectural-Plans.pdf" \
"/home/user/plans/Structural-Plans.pdf"
The uploaded files will be stored in the project's inputs/ folder and can then be referenced by filename during ingestion.
Ingest Plan Pages
Process and extract content from specific pages of uploaded PDF files.
GCS Filesystem (Cloud Storage):
cli/codeproof.sh architectural-plan ingest \
--project US.CA.SanJose-1550-Tech.Dr-rev2002-07-09 \
--pages "1,2,3" \
--filesystem GCS \
"Building-Plans.pdf"
LOCAL Filesystem:
cli/codeproof.sh architectural-plan ingest \
--project US.CA.SanJose-1550-Tech.Dr-rev2002-07-09 \
--pages "1,2,3,4,5" \
--filesystem LOCAL \
"Architectural-Plans.pdf"
Note: The filename refers to a file already uploaded to the project's inputs/ folder. For multi-file projects, run the ingest command separately for each file.
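For multi-file projects, the per-file ingest runs can be driven by a simple loop. In this sketch the real CLI invocation is commented out and replaced with a print so the loop itself can be run safely anywhere (filenames are from the upload examples above):

```shell
# Ingest each previously uploaded file in turn.
for f in "Architectural-Plans.pdf" "Structural-Plans.pdf"; do
  echo "ingesting: $f"
  # cli/codeproof.sh architectural-plan ingest \
  #   --project US.CA.SanJose-1550-Tech.Dr-rev2002-07-09 \
  #   --pages "1,2,3" --filesystem GCS "$f"
done
```

Uncomment the CLI call (and adjust `--pages` per file, since page counts usually differ between plan sets) to run it for real.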
Generate Compliance Reports
source env/dev/setvars.sh
cli/codeproof.sh code-compliance \
--model gemini-2.5-flash-preview-05-20 \
--page-number 7 \
--book-id 2217 \
--relevance-filter HIGH \
--section-prefix IBC2021P2_Ch11_
📖 For complete CLI reference, workflows, and troubleshooting, see CLI Plan Review Commands
Generating gRPC-Web Client Sources
The frontend Angular application uses gRPC-Web to communicate with the backend gRPC services. When protobuf definitions are updated, the frontend client sources must be regenerated.
Quick Generation
Use the dedicated helper script from the web-ng-m3 directory:
cd web-ng-m3
../cli/sdlc/utils/generate-grpc-web-sources.sh
This script:
- Checks for `protoc` availability and provides installation instructions if missing
- Automatically clones googleapis dependencies if not present
- Generates JavaScript and TypeScript client files in `src/generated.commonjs/`
- Provides detailed output about generated files
Manual Generation (Advanced)
If you need more control over the generation process:
cd web-ng-m3
# Ensure target directory exists
mkdir -p src/generated.commonjs
# Generate sources manually
protoc -I=../src/main/proto \
-I=../env/dependencies/googleapis \
--js_out=import_style=commonjs,binary:src/generated.commonjs \
--grpc-web_out=import_style=typescript,mode=grpcwebtext:src/generated.commonjs \
../src/main/proto/*.proto \
../env/dependencies/googleapis/google/type/date.proto \
../env/dependencies/googleapis/google/api/*.proto
When to Regenerate
Regenerate gRPC-Web sources whenever:
- Proto files are modified (`.proto` files in `src/main/proto/`)
- New gRPC services or methods are added
- Proto message definitions change
- Frontend compilation fails with missing gRPC client types
Integration with Build Process
The generation is automatically integrated into:
- Frontend deployment: `web-ng-m3/deploy.sh` calls the helper script
- Development workflow: Run the helper script after proto changes
- CI/CD pipelines: Include the helper script in build steps
Handling Accidentally Committed Generated Files
Problem: Generated gRPC-web sources were accidentally committed to version control.
Symptoms:
- Large commit with 100+ files in `web-ng-m3/src/generated.commonjs/`
- Files should be generated, not versioned
Solution: Undo commit, gitignore, and recommit
# 1. Undo last commit (keep changes staged)
git reset --soft HEAD~1
# 2. Unstage generated files
git reset HEAD web-ng-m3/src/generated.commonjs/
# 3. Add to gitignore (if not already there)
# Note: Should already be in web-ng-m3/.gitignore as "src/generated.commonjs/"
grep "generated.commonjs" web-ng-m3/.gitignore || echo "src/generated.commonjs/" >> web-ng-m3/.gitignore
# 4. Recommit without generated files
git add -A
git commit -m "Your commit message (no generated files)"
Explanation of Commands:
| Command | What It Does |
|---|---|
| `git reset --soft HEAD~1` | Undoes last commit, keeps changes staged |
| `git reset HEAD <path>` | Unstages specific files/directories |
| `echo "..." >> .gitignore` | Adds pattern to gitignore |
| `git add -A` | Stages all changes (except gitignored) |
Alternative (if you've already pushed):
# Remove from git but keep local files
git rm -r --cached web-ng-m3/src/generated.commonjs/
git commit -m "Remove generated sources from version control"
git push
Prevention:
- ✅ `web-ng-m3/.gitignore` should have `src/generated.commonjs/`
- ✅ Review `git status` before committing (look for 100+ file changes)
- ✅ Use `git add -A` carefully (it respects gitignore)
- Common generated patterns already gitignored:
  - `web-ng-m3/src/generated.commonjs/` (gRPC-web TypeScript)
  - `web/src/generated.commonjs/` (gRPC-web legacy)
  - `target/` (Maven build output)
  - `node_modules/` (npm dependencies)
  - `dist/` (build artifacts)
Admin UI Development
When working on admin features (like the /admin route for RBAC management):
- Update Proto Definitions: Add new admin-related gRPC methods to `rbac.proto`
- Regenerate Backend: Run `mvn clean protobuf:compile protobuf:compile-custom`
- Implement Backend: Add service implementations in `RBACServiceImpl.java`
- Regenerate Frontend: Run `../cli/sdlc/utils/generate-grpc-web-sources.sh`
- Update Frontend: Use new gRPC client types in Angular services
- Test Compilation: Run `npm run build` to verify everything compiles
This workflow ensures both backend and frontend stay synchronized with proto changes.
Deployment and Release Management
The project includes comprehensive deployment automation for all stack components. For quick deployments:
# Deploy entire stack to dev environment (includes Firestore indexes)
# ⚠️ IMPORTANT: Run this from within the Devcontainer to avoid Angular esbuild platform errors (See Issue #399)
./cli/sdlc/full-stack-deploy.sh dev
# Deploy only gRPC backend to test
./cli/sdlc/cloud-run-grpc/deploy.sh test
# Deploy only Cloud Run Jobs
./cli/sdlc/cloud-run-job/deploy.sh dev
# Deploy only Firestore indexes and rules
./cli/sdlc/firestore/deploy.sh dev
# Deploy only Firestore indexes (skip rules)
./cli/sdlc/firestore/deploy.sh test --indexes-only
📖 For complete deployment workflows, options, troubleshooting, and best practices, see Release Management
Firestore Index and Rules Deployment
The project includes dedicated Firestore deployment automation for managing database indexes and security rules:
# Deploy both indexes and rules to dev environment
./cli/sdlc/firestore/deploy.sh dev
# Deploy only Firestore indexes (skip rules)
./cli/sdlc/firestore/deploy.sh test --indexes-only
# Deploy only Firestore rules (skip indexes)
./cli/sdlc/firestore/deploy.sh prod --rules-only
When to use Firestore deployment:
- After adding new composite indexes to `web-ng-m3/firestore.indexes.json`
- After updating security rules in `web-ng-m3/firestore.rules`
- When experiencing Firestore query performance issues
Integration with other deployments:
- Full-stack deployment automatically includes Firestore deployment
- Frontend deployment automatically includes Firestore deployment
- Can be run independently for index-only or rules-only updates
Frontend Build Performance Optimization
Issue: #246 - Performance optimization for deployment
The frontend build process has been optimized to avoid unnecessary Java backend compilations during every deployment.
How It Works
The web-ng-m3/scripts/generate-step-metadata.js script intelligently caches TypeScript metadata generated from the Java backend:
- Smart Caching: Only regenerates if Java source files or proto files have changed
- Performance Impact: ~90% faster builds (30-60s vs 2-5 minutes)
- Automatic Detection: No manual intervention needed for typical workflows
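The cache decision amounts to a freshness check: regenerate when the cached metadata is missing or older than any of its inputs. A sketch of that idea (illustrative only; the real logic lives in `web-ng-m3/scripts/generate-step-metadata.js`):

```shell
# Print "yes" if the cache file is missing or older than any input file.
needs_regen() {
  cache="$1"; shift
  [ -f "$cache" ] || { echo yes; return; }
  for src in "$@"; do
    [ "$src" -nt "$cache" ] && { echo yes; return; }
  done
  echo no
}

cache=$(mktemp); src=$(mktemp)
touch -d '2020-01-01' "$cache"   # stale cache, fresh source
needs_regen "$cache" "$src"      # → yes

touch -d '2020-01-01' "$src"
touch "$cache"                   # fresh cache, old source
needs_regen "$cache" "$src"      # → no

rm -f "$cache" "$src"
```

Applied to this project, `"$cache"` corresponds to the generated metadata file and `"$@"` to the Java sources and protos it is derived from.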
Build Modes
Normal Build (recommended for daily development):
npm run build:dev
# Automatically skips metadata generation if files unchanged
Force Regeneration (after backend changes affecting step metadata):
FORCE_METADATA_GEN=true npm run build:dev
# Forces Maven compilation even if files appear unchanged
Skip Metadata Generation (when using committed metadata):
SKIP_METADATA_GEN=true npm run build:dev
# Uses existing metadata file, never regenerates
Deploy Script Integration
The web-ng-m3/deploy.sh script supports these optimizations:
# Normal deployment - automatic caching
./deploy.sh dev
# Explicitly skip metadata generation
./deploy.sh dev --skip-metadata-gen
# Skip entire build (use existing dist)
./deploy.sh dev --skip-build
When to Force Regeneration
Force metadata regeneration when:
- Adding new plan ingestion step types in the backend
- Modifying proto definitions that affect step metadata
- Experiencing TypeScript compilation errors about missing step types
- The cache appears stale (rare)
Troubleshooting
Build is slow despite optimization:
- Check if source files are being modified unexpectedly
- Run `npm run prebuild` separately to see if regeneration is happening
- Look for log messages indicating why regeneration was triggered
TypeScript errors about missing step types:
- Force regeneration: `FORCE_METADATA_GEN=true npm run build:dev`
- Verify backend compiles: `mvn compile`
- Check generated file exists: `web-ng-m3/src/app/shared/utils/plan-ingestion-step-metadata.ts`
CI/CD performance:
- Consider committing the generated metadata file to git
- Use `SKIP_METADATA_GEN=true` in CI pipelines if metadata is committed
- Otherwise, let the script auto-detect (it will cache across CI runs if artifacts are preserved)
📖 See also: web-ng-m3/scripts/README.md for detailed script documentation
Working with the gRPC API Service
Deploy the gRPC Service locally
# Regenerate sources for proto messages and services
mvn clean protobuf:compile protobuf:compile-custom
# Build entire project from sources
mvn clean compile
# Load environment variables
source env/dev/gcp/cloud-run/grpc/setvars.sh
# Start the server
mvn exec:java -Dexec.mainClass="org.codetricks.construction.code.assistant.service.ArchitecturalPlanServer"
# Install the grpcurl client (example for macOS/Homebrew)
brew install grpcurl
# Assuming the server is running on localhost:50051 and api.proto is in the current dir or import path
# List services (if reflection is enabled)
grpcurl -plaintext localhost:50051 list
# Describe the service (if reflection is enabled)
grpcurl -plaintext localhost:50051 describe org.codetricks.construction.code.assistant.service.ArchitecturalPlanService
# Call the method
grpcurl -plaintext \
-import-path src/main/proto \
-proto src/main/proto/api.proto \
-d '{"architectural_plan_id": "R2024.0091-2024-10-14"}' \
localhost:50051 \
org.codetricks.construction.code.assistant.service.ArchitecturalPlanService/GetArchitecturalPlan
Deploy gRPC Backend to Cloud Run
Quick deployment to Cloud Run:
# Deploy to dev environment
./cli/sdlc/cloud-run-grpc/deploy.sh dev
# Deploy to test without rebuilding
./cli/sdlc/cloud-run-grpc/deploy.sh test --skip-build
# Force deploy from dirty working directory
./cli/sdlc/cloud-run-grpc/deploy.sh dev --force
Common Options:
- `--skip-build`: Deploy existing image without rebuilding
- `--skip-tests`: Skip unit tests during build (default: enabled)
- `--force`: Allow deployment from dirty Git working directory
📖 For complete deployment workflows and troubleshooting, see Release Management
Deploy Cloud Run Jobs
Deploy long-running task processors:
# Deploy all job types to dev
./cli/sdlc/cloud-run-job/deploy.sh dev
# Deploy specific job type
./cli/sdlc/cloud-run-job/deploy.sh plan-ingestion dev
./cli/sdlc/cloud-run-job/deploy.sh code-applicability test
./cli/sdlc/cloud-run-job/deploy.sh compliance-report prod
Available Job Types:
- `plan-ingestion` - PDF page extraction and processing
- `code-applicability` - Code section applicability analysis
- `compliance-report` - Compliance report generation
Common Options:
- `--skip-build`: Deploy existing images without rebuilding
- `--skip-tests`: Skip unit tests during build (default: enabled)
- `--force`: Allow deployment from dirty Git working directory
📖 For job configuration, testing, and API usage, see Release Management
TODO: Add --service-account flag to the command above when authentication support is added.
--service-account google-groups-member-checker@${GCP_PROJECT_ID}.iam.gserviceaccount.com
Test gRPC call against the Cloud Run backend deployment
Set the GRPC_SERVER_HOST environment variable to the hostname of the Cloud Run deployment.
GRPC_SERVER_HOST=construction-code-expert-dev-856365345080.us-central1.run.app
# Don't use the -plaintext flag when using SSL in Cloud Run deployment.
grpcurl -import-path src/main/proto \
-proto src/main/proto/api.proto \
-d '{"architectural_plan_id": "R2024.0091-2024-10-14"}' \
${GRPC_SERVER_HOST}:443 \
org.codetricks.construction.code.assistant.service.ArchitecturalPlanService/GetArchitecturalPlan
Test Plan Page PDF retrieval
# Use `-max-msg-sz` to set the maximum message size to 10MB. Some single PDF pages can be larger than 4MB (default)
grpcurl -import-path src/main/proto \
-proto src/main/proto/api.proto \
-d '{"architectural_plan_id": "R2024.0091-2024-10-14", "page_number": 1}' \
-max-msg-sz $((10 * 1024 * 1024)) \
${GRPC_SERVER_HOST}:443 \
org.codetricks.construction.code.assistant.service.ArchitecturalPlanService/GetArchitecturalPlanPagePdf
Test RAG Search API
grpcurl -import-path src/main/proto \
-import-path env/dependencies/googleapis \
-proto src/main/proto/api.proto \
-d '{"icc_book_id": "2217", "query": "Cooling towers located on a roof of a building shall be constructed of non-combustible materials when the base area of the cooling tower is greater than how many square feet?", "max_results": 3}' \
${GRPC_SERVER_HOST}:443 \
org.codetricks.construction.code.assistant.service.ComplianceCodeSearchService/GetIccCodeSearchResults
Deploy ESPv2 to Cloud Run
⚠️ Known Issue: Issue #235 - AI SWE Agent cannot deploy ESPv2 due to missing permissions. Currently requires project owner to deploy manually.
Prerequisites
# Enable the following APIs:
gcloud services enable servicecontrol.googleapis.com --project=construction-code-expert-dev
# Get your project number
PROJECT_NUMBER=$(gcloud projects describe construction-code-expert-dev --format='value(projectNumber)')
# Grant the Service Controller IAM role to the service account used by ESPv2
gcloud projects add-iam-policy-binding construction-code-expert-dev \
--member="serviceAccount:${PROJECT_NUMBER}-compute@developer.gserviceaccount.com" \
--role="roles/servicemanagement.serviceController"
Run the ESPv2 deployment script
cd env
./deploy-endpoints.sh dev
💡 GitOps Alternative: To build the ESPv2 image without deploying (for GitOps workflows), use ./build-espv2-image.sh dev. This builds the image and promotes it to the central Artifact Registry for later deployment. See Release Management for details.
Test GRPC call against ESPv2 deployment
Set the ESP_SERVICE_HOST environment variable to the reserved hostname of the ESPv2 deployment.
ESP_SERVICE_HOST=construction-code-expert-esp2-dev-6yieikr6ca-uc.a.run.app
Test against ESPv2 deployment
# Reserved Hostname
grpcurl -import-path src/main/proto \
-proto src/main/proto/api.proto \
-d '{"architectural_plan_id": "R2024.0091-2024-10-14"}' \
${ESP_SERVICE_HOST}:443 \
org.codetricks.construction.code.assistant.service.ArchitecturalPlanService/GetArchitecturalPlan
Test PDF Fetch API
# Use `-max-msg-sz` to set the maximum message size to 10MB. Some single PDF pages can be larger than 4MB (default)
grpcurl -import-path src/main/proto \
-proto src/main/proto/api.proto \
-d '{"architectural_plan_id": "R2024.0091-2024-10-14", "page_number": 1}' \
-max-msg-sz $((10 * 1024 * 1024)) \
${ESP_SERVICE_HOST}:443 \
org.codetricks.construction.code.assistant.service.ArchitecturalPlanService/GetArchitecturalPlanPagePdf
Test ArchitecturalPlanReviewService with the GetApplicableCodeSections RPC
grpcurl -import-path src/main/proto \
-import-path env/dependencies/googleapis \
-proto src/main/proto/api.proto \
-d '{"architectural_plan_id": "R2024.0091-2024-10-14", "page_number": 6, "icc_book_id": "2217"}' \
${ESP_SERVICE_HOST}:443 \
org.codetricks.construction.code.assistant.service.ArchitecturalPlanReviewService/GetApplicableCodeSections
Test against REST API
Set the ESP_SERVICE_HOST environment variable to the reserved hostname of the ESPv2 deployment.
ESP_SERVICE_HOST=construction-code-expert-esp2-dev-6yieikr6ca-uc.a.run.app
curl -XPOST -H "Content-Type: application/json" \
-d '{"architectural_plan_id": "R2024.0091-2024-10-14"}' \
https://${ESP_SERVICE_HOST}/org.codetricks.construction.code.assistant.service.ArchitecturalPlanService/GetArchitecturalPlan
Test PDF Fetch API
curl -XPOST -H "Content-Type: application/json" \
-d '{"architectural_plan_id": "R2024.0091-2024-10-14", "page_number": 2}' \
https://${ESP_SERVICE_HOST}/org.codetricks.construction.code.assistant.service.ArchitecturalPlanService/GetArchitecturalPlanPagePdf
Test ArchitecturalPlanReviewService with the GetApplicableCodeSections RPC
curl -XPOST -H "Content-Type: application/json" \
-d '{"architectural_plan_id": "R2024.0091-2024-10-14", "page_number": 6, "icc_book_id": "2217"}' \
https://${ESP_SERVICE_HOST}/org.codetricks.construction.code.assistant.service.ArchitecturalPlanReviewService/GetApplicableCodeSections
Test ArchitecturalPlanReviewService/GetIccBookTableOfContents
curl -XPOST -H "Content-Type: application/json" \
-d '{"icc_book_id": "2217"}' \
https://${ESP_SERVICE_HOST}/org.codetricks.construction.code.assistant.service.ArchitecturalPlanReviewService/GetIccBookTableOfContents
curl -XPOST -H "Content-Type: application/json" \
-d '{"architectural_plan_id": "R2024.0091-2024-10-14", "page_number": 6, "icc_book_id": "2217"}' \
https://${ESP_SERVICE_HOST}/org.codetricks.construction.code.assistant.service.ArchitecturalPlanReviewService/GetPageComplianceReport
Test the Access Control List Service
curl -XPOST -H "Content-Type: application/json" \
-d '{"client_email": "sanchos101@gmail.com"}' \
https://${ESP_SERVICE_HOST}/org.codetricks.auth.AccessControlListService/GetAuthorizationStatus
curl -XPOST -H "Content-Type: application/json" \
-d '{"client_email": "sanchos101@gmail.com"}' \
https://${ESP_SERVICE_HOST}/v1/authorization
curl -XPOST -H "Content-Type: application/json" \
-d '{"client_email": "contact@codetricks.org"}' \
https://${ESP_SERVICE_HOST}/v1/authorization
Architectural Plan Review Artifacts Database
Architectural plan review artifacts are constructed gradually and persisted on a generic filesystem abstraction that can be backed by Local Disk, GCS, or Cloud Firestore (coming soon). The artifacts are stored in a directory structure organized in two orthogonal hierarchies:
- By Architectural Plan Page as root grouping
- By ICC Book Section as root grouping
Plan Page Grouping (Updated for Issue #167 - Multi-File Support)
Modern Structure (Multi-File Projects):
projects/${project_id}/
└ review/
└ compliance/
└ files/ ← NEW: Group by file first
└ ${file_id}/ ← NEW: File-specific review data
└ pages/
└ ${page_number}/
└ ${icc_book_id}/
├ ${icc_chapter_content_id}/
| └ applicability-report.json -- Sections within a Chapter applicable to given page number
├ applicability-report.json -- Sections within the entire book applicable to given page number
├ cumulative-compliance-report.json -- A concatenated list of all available json reports across all sections
└ synthesized-compliance-report.md -- A summary report of all available .md files across all applicable sections
Legacy Structure (Single-File Projects - Backward Compatibility):
projects/${project_id}/
└ review/
└ compliance/
└ pages/ ← LEGACY: Flat structure
└ ${page_number}/
└ ${icc_book_id}/
├ ${icc_chapter_content_id}/
└ ...
- compliance/pages/${page_number}/${icc_book_id}/${icc_chapter_content_id}/applicability-report.json is generated by the CodeApplicabilityInspector.determineSectionsApplicableToPage(pageNumber, iccBookId, chapter) method.
- compliance/pages/${page_number}/${icc_book_id}/applicability-report.json is generated by the CodeApplicabilityInspector.determineSectionsApplicableToPage(pageNumber, iccBookId) method.
- compliance/pages/${page_number}/${icc_book_id}/cumulative-compliance-report.json is generated by the CodeComplianceInspector.generateComplianceReportsForPage(pageNumber, iccBookApiId, filterFunction) method.
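As a sketch, the legacy per-book report location described above can be computed with a small helper. The function name and the project/book ids are hypothetical, for illustration only:

```shell
# Hypothetical helper: build the legacy per-book applicability-report path
applicability_report_path() {
  local project_id="$1" page_number="$2" icc_book_id="$3"
  echo "projects/${project_id}/review/compliance/pages/${page_number}/${icc_book_id}/applicability-report.json"
}

applicability_report_path my-project 6 2217
# → projects/my-project/review/compliance/pages/6/2217/applicability-report.json
```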
ICC Book Section Grouping (Updated for Issue #167 - Multi-File Support)
Modern Structure (Multi-File Projects):
projects/${project_id}/
└ review/
└ compliance/
└ code/
└ ${icc_book_id}/
└ ${icc_section_id}/
├ files/ ← NEW: Group by file first
| └ ${file_id}/ ← NEW: File-specific compliance data
| └ pages/
| └ ${page_number}/
| ├ compliance-report.json -- A report of compliance for a given section to a given page
| └ compliance-statement.md -- A redundant extract of the json field available for readability
├ synthesized-compliance-report.md -- A summary report of all available .md files across all pages
└ cumulative-compliance-report.json -- A concatenated list of all available json reports across all pages
Legacy Structure (Single-File Projects - Backward Compatibility):
projects/${project_id}/
└ review/
└ compliance/
└ code/
└ ${icc_book_id}/
└ ${icc_section_id}/
├ pages/ ← LEGACY: Flat structure
| └ ${page_number}/
| ├ compliance-report.json
| └ compliance-statement.md
└ ...
Troubleshooting
Cloud Run Logs
Use the get-cloud-run-logs.sh script to extract and preprocess Cloud Run logs efficiently:
# Get logs from the last 30 minutes
./get-cloud-run-logs.sh --env=dev --minutes=30
# Get logs using local time (uses system timezone)
./get-cloud-run-logs.sh --env=dev --from='2025-01-17 06:30:00' --to='2025-01-17 07:30:00'
# Get logs with specific timezone
./get-cloud-run-logs.sh --env=dev --from='2025-01-17 06:30:00' --timezone='America/Los_Angeles'
# Get logs using UTC timestamps (traditional way)
./get-cloud-run-logs.sh --env=dev --from=2025-01-17T14:30:00Z --to=2025-01-17T15:00:00Z
# Get logs from specific time for 60 minutes
./get-cloud-run-logs.sh --env=dev --from='2025-01-17 06:30:00' --minutes=60
Automatic Timezone Conversion: The script handles timezone conversion automatically. You can use:
- Local time format: '2025-01-17 06:30:00' (uses the system timezone or --timezone)
- UTC format: 2025-01-17T14:30:00Z (passed through as-is)
- Timezone offset: 2025-01-17T06:30:00-08:00 (converted automatically)
- Date only: '2025-01-17' (assumes 00:00:00 in the specified timezone)
Supported Timezones:
- America/Los_Angeles (Pacific Time)
- America/New_York (Eastern Time)
- America/Chicago (Central Time)
- UTC (Coordinated Universal Time)
- Any standard IANA timezone identifier
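Under the hood, this kind of conversion amounts to a single GNU date invocation. A sketch of the idea (assumes GNU date with tzdata installed; not necessarily what the script actually runs):

```shell
# Parse a timestamp in a named timezone, print it in UTC.
# GNU date accepts a TZ="..." prefix inside the --date string.
date -u -d 'TZ="America/Los_Angeles" 2025-01-17 06:30:00' +%Y-%m-%dT%H:%M:%SZ
# → 2025-01-17T14:30:00Z
```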
Examples:
# Pacific Time examples
./get-cloud-run-logs.sh --env=dev --from='2025-01-17 06:30:00' --timezone='America/Los_Angeles'
# Eastern Time examples
./get-cloud-run-logs.sh --env=dev --from='2025-01-17 09:30:00' --timezone='America/New_York'
# System timezone (no --timezone needed)
./get-cloud-run-logs.sh --env=dev --from='2025-01-17 06:30:00'
The script automatically:
- Downloads raw JSON logs to logs/cloud-run-<env>.YYYY-MM-DD.HH:MM-YYYY-MM-DD.HH:MM.json
- Extracts text payloads to logs/cloud-run-<env>.YYYY-MM-DD.HH:MM-YYYY-MM-DD.HH:MM.log
- Works on both macOS and Linux
- Supports dev, stg, and prod environments
- Requires the gcloud CLI and jq to be installed
Manual gcloud commands (for reference):
Click to expand manual gcloud logging commands
For macOS (BSD date):
# Last N minutes (replace N with desired number)
MINUTES_AGO=17
START_TIME=$(date -u -v-${MINUTES_AGO}M +"%Y-%m-%dT%H:%M:%SZ")
gcloud logging read \
"resource.type=\"cloud_run_revision\" AND
resource.labels.service_name=\"construction-code-expert-dev\" AND
resource.labels.location=\"us-central1\" AND
severity>=DEFAULT AND
timestamp>=\"$START_TIME\"" \
--format=json \
--project=construction-code-expert-dev \
--limit=1000
# Alternative one-liner:
gcloud logging read \
"resource.type=\"cloud_run_revision\" AND resource.labels.service_name=\"construction-code-expert-dev\" AND resource.labels.location=\"us-central1\" AND severity>=DEFAULT AND timestamp>=\"$(date -u -v-30M +%Y-%m-%dT%H:%M:%SZ)\"" \
--format=json \
--project=construction-code-expert-dev \
--limit=1000
# To save to a timestamped file:
gcloud logging read ... > cloud-run-logs-$(date +%Y%m%d-%H%M%S).json
For Linux (GNU date):
# Last N minutes (replace N with desired number)
MINUTES_AGO=30
START_TIME=$(date -u -d "$MINUTES_AGO minutes ago" +"%Y-%m-%dT%H:%M:%SZ")
gcloud logging read \
"resource.type=\"cloud_run_revision\" AND
resource.labels.service_name=\"construction-code-expert-dev\" AND
resource.labels.location=\"us-central1\" AND
severity>=DEFAULT AND
timestamp>=\"$START_TIME\"" \
--format=json \
--project=construction-code-expert-dev \
--limit=1000
# Alternative one-liner:
gcloud logging read \
"resource.type=\"cloud_run_revision\" AND resource.labels.service_name=\"construction-code-expert-dev\" AND resource.labels.location=\"us-central1\" AND severity>=DEFAULT AND timestamp>=\"$(date -u -d '30 minutes ago' +%Y-%m-%dT%H:%M:%SZ)\"" \
--format=json \
--project=construction-code-expert-dev \
--limit=1000
# To save to a timestamped file:
gcloud logging read ... > cloud-run-logs-$(date +%Y%m%d-%H%M%S).json
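If you save the raw JSON yourself, the plain-text payloads can be pulled out with jq (which this setup already requires). A minimal, self-contained sketch; the file name and sample entries are illustrative:

```shell
# Sample of what `gcloud logging read --format=json` returns (two entries)
cat > cloud-run-logs.json <<'EOF'
[{"textPayload": "Server started"}, {"jsonPayload": {"message": "ignored"}}]
EOF

# Keep only entries that carry a plain-text payload
jq -r '.[] | select(.textPayload != null) | .textPayload' cloud-run-logs.json
# → Server started
```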
Querying Cloud Run Job Logs
While the get-cloud-run-logs.sh script is useful for general log extraction, you can directly query logs for specific Cloud Run Job executions using gcloud logging read for more targeted debugging. This is especially helpful when you have a specific task_id or execution_name.
🚀 Quick Task Debugging (Recommended):
Use the dedicated script for the most common debugging scenario - resolving a task ID to its Cloud Run Job logs:
# Debug a specific task (auto-detects job type)
./cli/sdlc/utils/resolve-task-to-logs.sh test 7c8e5d5d-f7aa-4eb6-9df5-88a310bc7935
# Show only error messages
./cli/sdlc/utils/resolve-task-to-logs.sh test 7c8e5d5d-f7aa-4eb6-9df5-88a310bc7935 --errors-only
# Specify job type explicitly
./cli/sdlc/utils/resolve-task-to-logs.sh test 7c8e5d5d-f7aa-4eb6-9df5-88a310bc7935 --job-type compliance-report
# Show raw JSON output
./cli/sdlc/utils/resolve-task-to-logs.sh test 7c8e5d5d-f7aa-4eb6-9df5-88a310bc7935 --raw-json
Manual Query Methods:
For more advanced debugging or when you need custom filtering:
Basic Query by Job Name:
To get all logs for a specific job in the test environment:
# Make sure you've sourced the environment variables first
source env/test/setvars.sh
gcloud logging read "resource.type=cloud_run_job AND resource.labels.job_name=plan-ingestion-processor-test" \
--project=$GCP_PROJECT_ID --limit=100
Filtering by Execution Name:
Each job run has a unique execution name. You can find this in the Cloud Console or in other log entries.
gcloud logging read "resource.type=cloud_run_job AND resource.labels.execution_name=plan-ingestion-processor-test-w4vvy9" \
--project=$GCP_PROJECT_ID
Searching for a Specific Task ID:
If a task fails, you can search for its unique ID within the log messages to find the exact logs related to its execution. This is the most effective way to debug a specific task and to resolve a task_id to a specific execution_name.
The execution_name can then be used for more targeted queries. You can parse the JSON output of the following command to find the run.googleapis.com/execution_name label.
# Replace with the actual task ID you are debugging
TASK_ID="19c7ad7f-f3dd-438a-b775-c2522f697ff8"
gcloud logging read "resource.type=cloud_run_job AND resource.labels.job_name=plan-ingestion-processor-test AND jsonPayload.message:\"${TASK_ID}\"" \
--project=$GCP_PROJECT_ID --format="json"
Finding Specific Log Messages:
You can also search for any text within the textPayload to find specific log lines, like the orientation detection message.
gcloud logging read "resource.type=cloud_run_job AND resource.labels.job_name=plan-ingestion-processor-test AND textPayload:\"detectPageRotationWithTesseract\"" \
--project=$GCP_PROJECT_ID
Resolving Task ID to Execution Name for Full Logs
To get the complete logs for a specific task, it's a two-step process. First, find the execution_name associated with your task_id, and then use that execution_name to retrieve all the logs for that job run.
Here is a generic script you can use for future debugging:
# 1. Set your environment and the Task ID you want to debug
source env/test/setvars.sh
TASK_ID="19c7ad7f-f3dd-438a-b775-c2522f697ff8" # Replace with your task ID
# 2. Resolve the Task ID to an Execution Name
# This command finds the first execution name associated with the task and stores it.
EXECUTION_NAME=$(gcloud logging read "resource.type=cloud_run_job AND resource.labels.job_name=plan-ingestion-processor-test AND textPayload:\"${TASK_ID}\"" \
--project=$GCP_PROJECT_ID --format="json" | jq -r '.[0].labels."run.googleapis.com/execution_name"')
# 3. Fetch all logs for that specific execution
if [ -z "$EXECUTION_NAME" ] || [ "$EXECUTION_NAME" == "null" ]; then
echo "Could not find execution name for Task ID: ${TASK_ID}"
else
echo "Found Execution Name: ${EXECUTION_NAME}"
echo "Fetching all logs for this execution..."
gcloud logging read "resource.type=cloud_run_job AND resource.labels.execution_name=${EXECUTION_NAME}" \
--project=$GCP_PROJECT_ID
fi
Debugging Firestore Database
Fetching Firestore Documents
When debugging task progress, user data, or any other Firestore-stored information, you can use the generic Firestore document fetcher utility script:
Location: cli/sdlc/utils/fetch-firestore-object.sh
Usage:
# Navigate to the utilities directory
cd /workspaces/construction-code-expert/cli/sdlc/utils
# Fetch any Firestore document
./fetch-firestore-object.sh <collection> <documentId> [project]
# Examples:
./fetch-firestore-object.sh tasks a7326bb7-9d32-442c-85bc-0807541f893d
./fetch-firestore-object.sh tasks a7326bb7-9d32-442c-85bc-0807541f893d construction-code-expert-test
./fetch-firestore-object.sh users john.doe@example.com
./fetch-firestore-object.sh projects my-project-id
Default Project: construction-code-expert-test (can be overridden with third parameter)
Output: The script provides both the raw Firestore document and simplified field view for easy debugging.
Analyzing Task Timing Data
For performance analysis and progress percentage optimization, use the specialized timing analysis script:
Location: cli/sdlc/utils/query-task-timing.sh
Usage:
# Analyze timing for any task type
./query-task-timing.sh <taskId> [project]
# Examples for different task types:
./query-task-timing.sh a7326bb7-9d32-442c-85bc-0807541f893d # Plan ingestion task
./query-task-timing.sh <code-applicability-task-id> # Code applicability task
./query-task-timing.sh <compliance-report-task-id> # Compliance report task
Output: The script provides detailed timing analysis including:
- Step-by-step duration breakdown
- Total execution time
- Proportional progress percentages based on actual timing
- Bottleneck identification
Use Cases:
- Optimizing progress percentage distribution (as done in #172)
- Identifying performance bottlenecks in task execution
- Debugging stuck or slow-running tasks
- Analyzing cost vs. performance trade-offs
Authentication Requirements
Both scripts use the AI SWE Agent service account credentials automatically:
- Service Account: ai-swe-agent@construction-code-expert-test.iam.gserviceaccount.com
- Credentials: Configured via the GOOGLE_APPLICATION_CREDENTIALS environment variable
- Permissions: Firestore read access (roles/datastore.user)
The scripts work out-of-the-box in the dev container environment without additional authentication setup.
Debugging Firebase RBAC Permissions
When investigating sharing settings, project access, or user permission issues, you can export and analyze the current Firebase custom claims structure. This is particularly useful for debugging project copy functionality, access control issues, or verifying that permissions are set correctly.
Export Firebase Custom Claims as YAML
The FirebaseRbacTest.testExportPermissionsAsYaml() test provides a reliable way to inspect the current Firebase custom claims structure for any environment.
For Development Environment
# Run the existing test (uses dev environment by default)
export JAVA_HOME=/usr/lib/jvm/temurin-23-jdk-arm64
mvn test -Dtest=FirebaseRbacTest#testExportPermissionsAsYaml
For Test Environment
# Run the test environment version
export JAVA_HOME=/usr/lib/jvm/temurin-23-jdk-arm64
mvn test -Dtest=FirebaseRbacTestEnvironmentDebug#testExportPermissionsAsYamlFromTestEnvironment
For Other Environments
To debug permissions in other environments (stg, prod), create a similar test class or modify the existing one to use the appropriate:
- Service account credentials file (.secrets/credentials/construction-code-expert-{env}.*.json)
- Project ID (construction-code-expert-{env})
- Firebase app name (use unique names to avoid conflicts)

Environment UI URLs:
- Test: https://test.m3.codeproof.app (Note: use this domain because the firebaseConfig.json lists codeproof.app as the authorized auth domain)
- Production: https://m3.codeproof.app

[!TIP] Cache Busting: When testing in the browser (especially after a fresh deployment), the UI might be cached. To force a hard reload, append a unique query parameter to the URL, such as ?t= followed by a timestamp (e.g., ?t=1764470000). This forces the browser to fetch fresh assets.
Understanding the Output
The export shows the Firebase custom claims structure in YAML format:
user1@example.com:
projects:
ProjectA: OWNER
ProjectB: READER
ProjectC: EDITOR
user2@example.com:
projects:
ProjectA: READER
ProjectD: OWNER
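To quickly pull one user's block out of the exported YAML, plain awk is enough. A self-contained sketch; the file name and sample users are illustrative:

```shell
# Create a tiny sample in the exported format
cat > exported-permissions.yaml <<'EOF'
user1@example.com:
  projects:
    ProjectA: OWNER
    ProjectB: READER
user2@example.com:
  projects:
    ProjectA: READER
EOF

# Print only user1's block: start at their key, stop at the next top-level key
awk '/^user1@example.com:/{f=1; next} /^[^ ]/{f=0} f' exported-permissions.yaml
# prints the three indented lines under user1@example.com
```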
Common Debugging Scenarios
1. Project Copy Sharing Settings Issues
- Export custom claims to verify source project members exist
- Compare source vs. copied project member lists
- Check if users exist in Firebase (users without custom claims won't appear)
2. Access Control Debugging
- Verify user has expected role for a specific project
- Check if user exists in Firebase at all
- Identify orphaned permissions or missing users
3. Permission Synchronization Issues
- Compare Firebase custom claims with RBAC YAML files
- Verify that setPermissionsFromYaml() worked correctly
- Check for users who should have permissions but don't appear
Key Insights for Troubleshooting
- Missing Users: If a user doesn't appear in the export, they don't exist in Firebase and need to be created
- Empty Projects: Users with projects: {} or no projects key have no project access
- Role Validation: Check that roles are valid (READER, EDITOR, OWNER) and properly capitalized
- Project ID Matching: Ensure project IDs in Firebase exactly match the project IDs used in the application (case-sensitive)
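The role-validation check described above can be sketched as a tiny shell helper (the function name is made up for illustration):

```shell
# Hypothetical validator for the three supported project roles
is_valid_role() {
  case "$1" in
    READER|EDITOR|OWNER) return 0 ;;
    *) return 1 ;;
  esac
}

is_valid_role OWNER && echo valid
# → valid
is_valid_role owner || echo invalid   # roles are case-sensitive
# → invalid
```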
Automated Export for CI/CD
You can also export permissions programmatically for backup or auditing:
# Export to timestamped file
mvn test -Dtest=FirebaseRbacTest#testExportPermissionsAsYaml
# Check target/ directory for exported-permissions-*.yaml files
This debugging technique has proven essential for diagnosing complex permission issues, especially when investigating why sharing settings aren't being copied correctly in project copy operations.
CLI Commands for RBAC Management
The cli/codeproof.sh interface provides convenient commands for managing RBAC permissions from the command line. These commands are particularly useful for administrative tasks, environment setup, and troubleshooting.
Export RBAC Configuration
Export current Firebase Custom Claims as YAML:
# Export to console (uses default service account key)
cli/codeproof.sh rbac get-rbac-yaml --environment dev
# Export to file
cli/codeproof.sh rbac get-rbac-yaml \
--environment dev \
--output-file current-rbac-permissions.yaml
# Explicit service account key (if needed)
cli/codeproof.sh rbac get-rbac-yaml \
--environment dev \
--service-account-key .secrets/credentials/construction-code-expert-dev-firebase-adminsdk-serviceAccountKey.json \
--output-file current-rbac-permissions.yaml
Import RBAC Configuration
Import RBAC permissions from YAML file:
# Dry run (validate without making changes, uses default service account key)
cli/codeproof.sh rbac set-rbac-yaml \
--environment dev \
--yaml-file env/dev/rbac.yaml \
--dry-run
# Apply changes with automatic backup (uses default service account key)
cli/codeproof.sh rbac set-rbac-yaml \
--environment dev \
--yaml-file env/dev/rbac.yaml
# Apply changes without backup
cli/codeproof.sh rbac set-rbac-yaml \
--environment dev \
--yaml-file env/dev/rbac.yaml \
--backup=false
# Explicit service account key (if needed)
cli/codeproof.sh rbac set-rbac-yaml \
--environment dev \
--yaml-file env/dev/rbac.yaml \
--service-account-key .secrets/credentials/construction-code-expert-dev-firebase-adminsdk-serviceAccountKey.json
Command Features
get-rbac-yaml Command:
- Exports current Firebase Custom Claims as YAML
- Supports all environments (dev, demo, test, prod)
- Can save to file or print to console
- Provides user count statistics
- Uses default service account key paths (no -k parameter needed)
set-rbac-yaml Command:
- Imports RBAC permissions from YAML file
- Validates YAML structure before applying
- Supports dry-run mode for safe testing
- Creates automatic backups before changes
- Supports the extended YAML structure with admin roles
- Uses default service account key paths (no -k parameter needed)
Admin Role Management
The CLI commands support the extended YAML structure for admin roles:
admin.user@example.com:
projects:
ProjectA: OWNER
admin:
roles:
role: root # Grants access to /admin UI
Supported Admin Roles:
- root: Full administrative privileges, access to the /admin UI
- rbac/admin: RBAC management privileges
- rbac/editor: Can modify RBAC configuration
- rbac/reader: Can read RBAC configuration
- project/creator: Can create new projects
Default Service Account Key Paths
The CLI commands automatically use the following default service account key paths:
- dev: .secrets/credentials/construction-code-expert-dev-firebase-adminsdk-serviceAccountKey.json
- demo: .secrets/credentials/construction-code-expert-demo-firebase-adminsdk-serviceAccountKey.json
- test: .secrets/credentials/construction-code-expert-test-firebase-adminsdk-serviceAccountKey.json
You can override these defaults by explicitly providing the --service-account-key parameter if needed.
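The naming convention above can be expressed as a one-liner. The helper name is hypothetical, shown only to make the pattern explicit:

```shell
# Hypothetical helper mirroring the documented default key-path convention
default_key_path() {
  echo ".secrets/credentials/construction-code-expert-$1-firebase-adminsdk-serviceAccountKey.json"
}

default_key_path dev
# → .secrets/credentials/construction-code-expert-dev-firebase-adminsdk-serviceAccountKey.json
```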
Integration Testing with Java
Test Annotations
The project uses type-safe JUnit 5 annotations for test categorization and filtering (Issue #47).
Available Annotations:
- @IntegrationTest: Marks integration tests (require a running server)
- @ExpensiveTest: Tracks cost and duration with variance
- @LlmTest: Detailed LLM cost tracking (aligns with the CostAnalysisMetadata proto)
- @TestIssue: Links tests to issues (type-safe with the IssueProvider enum)
Example:
@IntegrationTest
@ExpensiveTest(
estimatedDurationMs = 180000,
durationVarianceMs = 60000,
estimatedCostUsd = 0.0,
reason = "Integration test with real gRPC server"
)
@TestIssue(
provider = IssueProvider.GITHUB,
repository = "sanchos101/construction-code-expert",
issueId = "227"
)
public class ProjectMetadataManagementIntegrationTest { ... }
Running Tests
Regular Build (unit tests only, fast):
mvn test # Excludes integration and expensive tests
Integration Tests:
# Run all integration tests
mvn test -Dgroups=integration
# Run specific integration test
mvn test -Dtest=ProjectMetadataManagementIntegrationTest
Expensive/LLM Tests:
# Run expensive tests
mvn test -Dgroups=expensive
# Run LLM tests only
mvn test -Dgroups=llm
# Exclude LLM tests (save money)
mvn test -DexcludedGroups=llm
Example Integration Test
See ProjectMetadataManagementIntegrationTest.java for a complete example:
- Uses JUnit 5 with @Order for sequential execution
- Creates authenticated gRPC stubs
- Tests against a local or remote server
- Includes setup, test steps, and cleanup
Location: src/test/java/org/codetricks/construction/code/assistant/service/ProjectMetadataManagementIntegrationTest.java
Test Annotation Details
Location: src/test/java/org/codetricks/construction/code/assistant/test/
Documentation: See package-info.java in the test annotations package
Frontend Development Best Practices
Component Architecture & CSS Standards (TLDR)
Based on our Admin UI implementation and component refactoring experience:
Key Patterns:
- ✅ Component Inheritance: Use the BaseTopAppBarComponent pattern for shared UI elements
- ✅ Content Projection: Use named slots ([slot="content"]) for flexible APIs
- ✅ Avoid !important: Use the :host selector and proper CSS specificity instead
- ✅ Material Design Integration: Understand the subscript wrapper's purpose; avoid wrapping form fields in cards
- ✅ Loading State Management: Initialize with loading = true to show progress immediately
- ✅ Route Detection: Use startWith() and explicit boolean checks to prevent UI flicker
Architecture Example:
BaseTopAppBarComponent (shared foundation)
├── AdminTopAppBarComponent (admin features)
├── ProjectTopAppBarComponent (project features)
└── LandingTopAppBarComponent (minimal landing)
📖 For detailed examples, code patterns, and comprehensive best practices, see UI Best Practices