AITBC Infrastructure Templates

This directory contains Terraform and Helm templates for deploying AITBC services across dev, staging, and production environments.

Directory Structure

infra/
├── terraform/                 # Infrastructure as Code
│   ├── modules/              # Reusable Terraform modules
│   │   └── kubernetes/       # EKS cluster module
│   └── environments/         # Environment-specific configurations
│       ├── dev/
│       ├── staging/
│       └── prod/
└── helm/                     # Helm Charts
    ├── charts/               # Application charts
    │   ├── coordinator/      # Coordinator API chart
    │   ├── blockchain-node/  # Blockchain node chart
    │   └── monitoring/       # Monitoring stack (Prometheus, Grafana)
    └── values/               # Environment-specific values
        ├── dev.yaml
        ├── staging.yaml
        └── prod.yaml

Quick Start

Prerequisites

  • Terraform >= 1.0
  • Helm >= 3.0
  • kubectl configured for your cluster
  • AWS CLI configured (for EKS)

Deploy Development Environment

  1. Provision Infrastructure with Terraform:

    cd infra/terraform/environments/dev
    terraform init
    terraform apply
    
  2. Configure kubectl:

    aws eks update-kubeconfig --name aitbc-dev --region us-west-2
    
  3. Deploy Applications with Helm:

    # Add required Helm repositories
    helm repo add bitnami https://charts.bitnami.com/bitnami
    helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
    helm repo add grafana https://grafana.github.io/helm-charts
    helm repo update
    
    # Deploy monitoring stack
    helm install monitoring ../../../helm/charts/monitoring -f ../../../helm/values/dev.yaml
    
    # Deploy coordinator API
    helm install coordinator ../../../helm/charts/coordinator -f ../../../helm/values/dev.yaml
    

Environment Configurations

Development

  • 1 replica per service
  • Minimal resource allocation
  • Public EKS endpoint enabled
  • 7-day metrics retention

Staging

  • 2-3 replicas per service
  • Moderate resource allocation
  • Autoscaling enabled
  • 30-day metrics retention
  • TLS with staging certificates

Production

  • 3+ replicas per service
  • High resource allocation
  • Full autoscaling configuration
  • 90-day metrics retention
  • TLS with production certificates
  • Network policies enabled
  • Backup configuration enabled
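As a hedged illustration, the per-environment differences above would typically live in the Helm values overrides. The keys below (replicaCount, autoscaling, retention, networkPolicy, backup) are hypothetical and should be matched against the actual chart schemas under helm/values/:

```yaml
# Hypothetical excerpt of helm/values/prod.yaml — key names are
# illustrative; align them with the real chart values schema.
coordinator:
  replicaCount: 3
  autoscaling:
    enabled: true
    minReplicas: 3
    maxReplicas: 10
    targetCPUUtilizationPercentage: 70
monitoring:
  prometheus:
    retention: 90d          # 7d in dev, 30d in staging
networkPolicy:
  enabled: true             # production only
backup:
  enabled: true
  schedule: "0 2 * * *"
```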

Monitoring

The monitoring stack includes:

  • Prometheus: Metrics collection and storage
  • Grafana: Visualization dashboards
  • AlertManager: Alert routing and notification
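For Prometheus to discover application metrics, each chart usually ships a ServiceMonitor. A minimal sketch, assuming the coordinator service exposes metrics at /metrics on a port named http (both are assumptions, as is the label selector):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: coordinator
  labels:
    release: monitoring     # must match the Prometheus operator's selector
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: coordinator   # assumed service label
  endpoints:
    - port: http            # assumed metrics port name
      path: /metrics
      interval: 30s
```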

Access Grafana:

kubectl port-forward svc/monitoring-grafana 3000:3000
# Open http://localhost:3000
# Default credentials: admin/admin (check values files for environment-specific passwords)

Scaling Guidelines

Based on benchmark results (apps/blockchain-node/scripts/benchmark_throughput.py):

  • Coordinator API: Scale horizontally at ~500 TPS per node
  • Blockchain Node: Scale horizontally at ~1000 TPS per node
  • Wallet Daemon: Scale based on concurrent users
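The TPS thresholds above can be approximated with a HorizontalPodAutoscaler. Scaling directly on TPS would require a custom-metrics adapter, so this sketch falls back to CPU utilization as a proxy; the deployment name and numbers are illustrative:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: coordinator
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: coordinator       # assumed deployment name
  minReplicas: 3
  maxReplicas: 12
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```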

Security Considerations

  • Private subnets for all application workloads
  • Network policies restrict traffic between services
  • Secrets managed via Kubernetes Secrets
  • TLS termination at ingress level
  • Pod Security Policies enforced in production
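The "network policies restrict traffic" point above could take the shape of the following allow-from-ingress sketch; the namespace, pod labels, and port are assumptions, not values from this repo:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: coordinator-ingress
  namespace: aitbc          # assumed namespace
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: coordinator       # assumed pod label
  policyTypes: ["Ingress"]
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app.kubernetes.io/name: ingress-nginx   # assumed ingress pods
      ports:
        - protocol: TCP
          port: 8000        # assumed coordinator port
```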

Backup and Recovery

  • Automated daily backups of PostgreSQL databases
  • EBS snapshots for persistent volumes
  • Cross-region replication for production data
  • Restore procedures documented in runbooks
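A daily PostgreSQL backup can be expressed as a Kubernetes CronJob. This sketch assumes a db-credentials Secret providing DATABASE_URL and a backups PersistentVolumeClaim, neither of which is defined in these charts:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: postgres-backup
spec:
  schedule: "0 2 * * *"     # daily at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: pg-dump
              image: postgres:16
              command: ["/bin/sh", "-c",
                        "pg_dump \"$DATABASE_URL\" | gzip > /backups/$(date +%F).sql.gz"]
              envFrom:
                - secretRef:
                    name: db-credentials     # assumed Secret
              volumeMounts:
                - name: backups
                  mountPath: /backups
          volumes:
            - name: backups
              persistentVolumeClaim:
                claimName: backups           # assumed PVC
```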

Cost Optimization

  • Use Spot instances for non-critical workloads
  • Implement cluster autoscaling
  • Right-size resources based on metrics
  • Schedule non-production environments to run only during business hours
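Running non-critical workloads on Spot capacity usually comes down to a node selector plus a toleration for the Spot node group's taint. The capacityType label is the one EKS applies to managed node groups; the taint key is an assumption about how the node groups are configured:

```yaml
# Hypothetical pod-spec fragment for scheduling onto Spot nodes.
nodeSelector:
  eks.amazonaws.com/capacityType: SPOT   # EKS managed node group label
tolerations:
  - key: "spot"            # assumed custom taint on the Spot node group
    operator: "Exists"
    effect: "NoSchedule"
```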

Troubleshooting

Common issues and solutions:

  1. Helm chart fails to install:

    • Check that all chart dependencies are added
    • Verify the kubectl context is correct
    • Review the values files for syntax errors

  2. Prometheus not scraping metrics:

    • Verify the ServiceMonitor CRDs are installed
    • Check the service annotations
    • Review network policies

  3. High memory usage:

    • Review resource limits in the values files
    • Check for memory leaks in the applications
    • Consider increasing the node size

Contributing

When adding new services:

  1. Create a new Helm chart in helm/charts/
  2. Add environment-specific values in helm/values/
  3. Update monitoring configuration to include new service metrics
  4. Document any special requirements in this README
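Step 1 above amounts to scaffolding a chart; a minimal Chart.yaml for a hypothetical new service might look like the following (the name and versions are placeholders):

```yaml
# helm/charts/<new-service>/Chart.yaml — hypothetical scaffold
apiVersion: v2
name: new-service           # placeholder name
description: AITBC <new-service> chart
type: application
version: 0.1.0
appVersion: "0.1.0"
```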