feat: add marketplace metrics, privacy features, and service registry endpoints
- Add Prometheus metrics for marketplace API throughput and error rates with new dashboard panels
- Implement confidential transaction models with encryption support and access control
- Add key management system with registration, rotation, and audit logging
- Create services and registry routers for service discovery and management
- Integrate ZK proof generation for privacy-preserving receipts
- Add metrics instru
docs/.github/workflows/deploy-docs.yml (new file, vendored, 115 lines)
@@ -0,0 +1,115 @@
name: Deploy Documentation

on:
  push:
    branches: [ main, develop ]
    paths: [ 'docs/**' ]
  pull_request:
    branches: [ main ]
    paths: [ 'docs/**' ]
  workflow_dispatch:

permissions:
  contents: read
  pages: write
  id-token: write

concurrency:
  group: "pages"
  cancel-in-progress: false

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Setup Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.11'

      - name: Install dependencies
        run: |
          pip install -r docs/requirements.txt

      - name: Generate OpenAPI specs
        run: |
          cd docs
          python scripts/generate_openapi.py

      - name: Build documentation
        run: |
          cd docs
          mkdocs build --strict

      - name: Upload artifact
        uses: actions/upload-pages-artifact@v2
        with:
          path: docs/site

  deploy:
    environment:
      name: github-pages
      url: ${{ steps.deployment.outputs.page_url }}
    runs-on: ubuntu-latest
    needs: build
    if: github.ref == 'refs/heads/main'
    steps:
      - name: Deploy to GitHub Pages
        id: deployment
        uses: actions/deploy-pages@v2

  # Deploy to staging for develop branch
  deploy-staging:
    runs-on: ubuntu-latest
    needs: build
    if: github.ref == 'refs/heads/develop'
    steps:
      - name: Deploy to Staging
        uses: peaceiris/actions-gh-pages@v3
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          publish_dir: ./docs/site
          destination_dir: staging
          user_name: github-actions[bot]
          user_email: github-actions[bot]@users.noreply.github.com

  # Deploy to production S3
  deploy-production:
    runs-on: ubuntu-latest
    needs: build
    if: github.ref == 'refs/heads/main'
    environment: production
    steps:
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v2
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1

      - name: Deploy to S3
        run: |
          aws s3 sync docs/site/ s3://docs.aitbc.io/ --delete
          aws cloudfront create-invalidation --distribution-id ${{ secrets.CLOUDFRONT_DISTRIBUTION_ID }} --paths "/*"

  # Notify on deployment
  notify:
    runs-on: ubuntu-latest
    needs: [deploy, deploy-production]
    if: always()
    steps:
      - name: Notify Discord
        uses: rjstone/discord-webhook-notify@v1
        with:
          severity: info
          text: "Documentation deployment completed"
          description: |
            Build: ${{ needs.build.result }}
            Deploy: ${{ needs.deploy.result }}
            Production: ${{ needs.deploy-production.result }}
          webhookUrl: ${{ secrets.DISCORD_WEBHOOK }}
docs/.pages (new file, 87 lines)
@@ -0,0 +1,87 @@
# .pages configuration for awesome-pages plugin

home: index.md
format: standard
ordering:
  asc: title

sections:
  - title: Getting Started
    icon: material/rocket-launch
    children:
      - getting-started/introduction.md
      - getting-started/quickstart.md
      - getting-started/installation.md
      - getting-started/architecture.md

  - title: User Guide
    icon: material/account-group
    children:
      - user-guide/overview.md
      - user-guide/creating-jobs.md
      - user-guide/marketplace.md
      - user-guide/explorer.md
      - user-guide/wallet-management.md

  - title: Developer Guide
    icon: material/code-tags
    children:
      - developer-guide/overview.md
      - developer-guide/setup.md
      - developer-guide/api-authentication.md
      - title: SDKs
        icon: material/package-variant
        children:
          - developer-guide/sdks/python.md
          - developer-guide/sdks/javascript.md
      - developer-guide/examples.md
      - developer-guide/contributing.md

  - title: API Reference
    icon: material/api
    children:
      - title: Coordinator API
        icon: material/server
        children:
          - api/coordinator/overview.md
          - api/coordinator/authentication.md
          - api/coordinator/endpoints.md
          - api/coordinator/openapi.md
      - title: Blockchain Node API
        icon: material/link-variant
        children:
          - api/blockchain/overview.md
          - api/blockchain/websocket.md
          - api/blockchain/jsonrpc.md
          - api/blockchain/openapi.md
      - title: Wallet Daemon API
        icon: material/wallet
        children:
          - api/wallet/overview.md
          - api/wallet/endpoints.md
          - api/wallet/openapi.md

  - title: Operations
    icon: material/cog
    children:
      - operations/deployment.md
      - operations/monitoring.md
      - operations/security.md
      - operations/backup-restore.md
      - operations/troubleshooting.md

  - title: Tutorials
    icon: material/school
    children:
      - tutorials/building-dapp.md
      - tutorials/mining-setup.md
      - tutorials/running-node.md
      - tutorials/integration-examples.md

  - title: Resources
    icon: material/information
    children:
      - resources/glossary.md
      - resources/faq.md
      - resources/support.md
      - resources/changelog.md
@@ -1,10 +1,12 @@
# Coordinator API – Task Breakdown

-## Status (2025-09-27)
+## Status (2025-12-22)

- **Stage 1 delivery**: Core FastAPI service, persistence, job lifecycle, and miner flows implemented under `apps/coordinator-api/`. Receipt signing now includes optional coordinator attestations with history retrieval endpoints.
- **Testing & tooling**: Pytest suites cover job scheduling, miner flows, and receipt verification; the shared CI script `scripts/ci/run_python_tests.sh` executes these tests in GitHub Actions.
- **Documentation**: `docs/run.md` and `apps/coordinator-api/README.md` describe configuration for `RECEIPT_SIGNING_KEY_HEX` and `RECEIPT_ATTESTATION_KEY_HEX` plus the receipt history API.
- **Service APIs**: Implemented specific service endpoints for common GPU workloads (Whisper, Stable Diffusion, LLM inference, FFmpeg, Blender) with typed schemas and validation.
- **Service Registry**: Created dynamic service registry framework supporting 30+ GPU services across 6 categories (AI/ML, Media Processing, Scientific Computing, Data Analytics, Gaming, Development Tools).

## Stage 1 (MVP)

@@ -27,6 +29,17 @@
- Build `/v1/jobs` endpoints (submit, get status, get result, cancel) with idempotency support.
- Build `/v1/miners` endpoints (register, heartbeat, poll, result, fail, drain).
- Build `/v1/admin` endpoints (stats, job listing, miner listing) with admin auth.
- Build `/v1/services` endpoints for specific GPU workloads:
  - `/v1/services/whisper/transcribe` - Audio transcription
  - `/v1/services/stable-diffusion/generate` - Image generation
  - `/v1/services/llm/inference` - Text generation
  - `/v1/services/ffmpeg/transcode` - Video transcoding
  - `/v1/services/blender/render` - 3D rendering
- Build `/v1/registry` endpoints for dynamic service management:
  - `/v1/registry/services` - List all available services
  - `/v1/registry/services/{id}` - Get service definition
  - `/v1/registry/services/{id}/schema` - Get JSON schema
  - `/v1/registry/services/{id}/requirements` - Get hardware requirements
- Optionally add WebSocket endpoints under `ws/` for streaming updates.
- **Receipts & Attestations**
  - ✅ Persist signed receipts (latest + history), expose `/v1/jobs/{job_id}/receipt(s)` endpoints, and attach optional coordinator attestations when `RECEIPT_ATTESTATION_KEY_HEX` is configured.
docs/developer/api-authentication.md (new file, 77 lines)
@@ -0,0 +1,77 @@
---
title: API Authentication
description: Understanding and implementing API authentication
---

# API Authentication

All AITBC API endpoints require authentication using API keys.

## Getting API Keys

1. Visit the [AITBC Dashboard](https://dashboard.aitbc.io)
2. Create an account or sign in
3. Navigate to the API Keys section
4. Generate a new API key

## Using API Keys

### HTTP Header
```http
X-API-Key: your_api_key_here
```

### Environment Variable
```bash
export AITBC_API_KEY="your_api_key_here"
```

### SDK Configuration
```python
from aitbc import AITBCClient

client = AITBCClient(api_key="your_api_key")
```

## Security Best Practices

- Never commit API keys to version control
- Use environment variables in production (see the sketch below)
- Rotate keys regularly
- Use different keys for different environments
- Monitor API key usage
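A minimal sketch of the environment-variable approach with the Python SDK, using the `AITBC_API_KEY` variable shown above:

```python
import os

from aitbc import AITBCClient

# Read the key from the environment instead of hard-coding it
client = AITBCClient(api_key=os.environ["AITBC_API_KEY"])
```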
## Rate Limits

API requests are rate-limited based on your plan:

- Free: 60 requests/minute
- Pro: 600 requests/minute
- Enterprise: 6000 requests/minute

## Error Handling

```python
from aitbc.exceptions import AuthenticationError

try:
    client.jobs.create({...})
except AuthenticationError:
    print("Invalid API key")
```

## Key Management

### View Your Keys
```bash
aitbc api-keys list
```

### Revoke a Key
```bash
aitbc api-keys revoke <key_id>
```

### Regenerate a Key
```bash
aitbc api-keys regenerate <key_id>
```
docs/developer/api/api/coordinator/authentication.md (new file, 111 lines)
@@ -0,0 +1,111 @@
---
title: API Authentication
description: Understanding authentication for the Coordinator API
---

# API Authentication

All Coordinator API endpoints require authentication using API keys.

## Getting Started

1. Sign up at the [AITBC Dashboard](https://dashboard.aitbc.io)
2. Generate an API key
3. Include the key in your requests

## Authentication Methods

### HTTP Header (Recommended)
```http
X-API-Key: your_api_key_here
```

### Query Parameter
```http
GET /v1/jobs?api_key=your_api_key_here
```

## Example Requests

### cURL
```bash
curl -X GET https://api.aitbc.io/v1/jobs \
  -H "X-API-Key: your_api_key_here"
```

### Python
```python
import requests

headers = {
    "X-API-Key": "your_api_key_here"
}

response = requests.get(
    "https://api.aitbc.io/v1/jobs",
    headers=headers
)
```

### JavaScript
```javascript
const headers = {
  "X-API-Key": "your_api_key_here"
};

fetch("https://api.aitbc.io/v1/jobs", {
  headers: headers
})
  .then(response => response.json())
  .then(data => console.log(data));
```

## Security Best Practices

- Never expose API keys in client-side code
- Use environment variables in production
- Rotate keys regularly
- Monitor API usage
- Use HTTPS for all requests

## Rate Limits

API requests are rate-limited based on your plan:

- Free: 60 requests/minute
- Pro: 600 requests/minute
- Enterprise: 6000 requests/minute

Rate limit headers are included in responses:
```http
X-RateLimit-Limit: 60
X-RateLimit-Remaining: 59
X-RateLimit-Reset: 1640995200
```
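Clients can watch these headers and the 429 status code to back off before retrying. A minimal sketch with `requests` (the helper below is illustrative, not part of the SDK):

```python
import time

import requests

HEADERS = {"X-API-Key": "your_api_key_here"}

def get_with_backoff(url, max_attempts=5):
    """Retry a GET request when the rate limit (HTTP 429) is hit."""
    for _ in range(max_attempts):
        response = requests.get(url, headers=HEADERS)
        if response.status_code != 429:
            return response
        # X-RateLimit-Reset is a Unix timestamp marking the start of the next window
        reset_at = int(response.headers.get("X-RateLimit-Reset", "0"))
        time.sleep(max(reset_at - time.time(), 1))
    raise RuntimeError("Rate limit retries exhausted")

jobs = get_with_backoff("https://api.aitbc.io/v1/jobs").json()
```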
## Error Handling

```json
{
  "error": {
    "code": "INVALID_API_KEY",
    "message": "The provided API key is invalid"
  }
}
```

## Key Management

### View Your Keys
Visit the [Dashboard](https://dashboard.aitbc.io/api-keys)

### Revoke a Key
```bash
curl -X DELETE https://api.aitbc.io/v1/api-keys/{key_id} \
  -H "X-API-Key: your_master_key"
```

### Regenerate a Key
```bash
curl -X POST https://api.aitbc.io/v1/api-keys/{key_id}/regenerate \
  -H "X-API-Key: your_master_key"
```
docs/developer/api/api/coordinator/endpoints.md (new file, 575 lines)
@@ -0,0 +1,575 @@
---
title: API Endpoints
description: Complete list of Coordinator API endpoints
---

# API Endpoints

## Jobs

### Create Job
```http
POST /v1/jobs
```

Create a new AI job.

**Request Body:**
```json
{
  "name": "image-classification",
  "type": "ai-inference",
  "model": {
    "type": "python",
    "entrypoint": "model.py",
    "requirements": ["numpy", "torch"]
  },
  "input": {
    "type": "image",
    "format": "jpeg"
  },
  "output": {
    "type": "json"
  },
  "resources": {
    "cpu": "1000m",
    "memory": "2Gi",
    "gpu": "1"
  },
  "pricing": {
    "max_cost": "0.10"
  }
}
```

**Response:**
```json
{
  "job_id": "job_1234567890",
  "status": "submitted",
  "created_at": "2024-01-01T12:00:00Z",
  "estimated_completion": "2024-01-01T12:05:00Z"
}
```

### Get Job Status
```http
GET /v1/jobs/{job_id}
```

Retrieve the current status of a job.

**Response:**
```json
{
  "job_id": "job_1234567890",
  "status": "running",
  "progress": 75,
  "created_at": "2024-01-01T12:00:00Z",
  "started_at": "2024-01-01T12:01:00Z",
  "estimated_completion": "2024-01-01T12:05:00Z",
  "miner_id": "miner_1234567890"
}
```

### List Jobs
```http
GET /v1/jobs
```

List all jobs with optional filtering.

**Query Parameters:**
- `status` (string): Filter by status (submitted, running, completed, failed)
- `type` (string): Filter by job type
- `limit` (integer): Maximum number of jobs to return (default: 50)
- `offset` (integer): Number of jobs to skip (default: 0)

**Response:**
```json
{
  "jobs": [
    {
      "job_id": "job_1234567890",
      "status": "completed",
      "type": "ai-inference",
      "created_at": "2024-01-01T12:00:00Z"
    }
  ],
  "total": 1,
  "limit": 50,
  "offset": 0
}
```
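The `limit`/`offset` parameters and the `total` field in the response can be combined to page through long job lists. A minimal pagination sketch with `requests` (the helper function is illustrative; endpoint, parameters, and the `X-API-Key` header are as documented above):

```python
import requests

BASE_URL = "https://api.aitbc.io"
HEADERS = {"X-API-Key": "your_api_key_here"}

def list_all_jobs(status=None, page_size=50):
    """Fetch every job by walking limit/offset pages until `total` is reached."""
    jobs, offset = [], 0
    while True:
        params = {"limit": page_size, "offset": offset}
        if status:
            params["status"] = status
        page = requests.get(f"{BASE_URL}/v1/jobs", headers=HEADERS, params=params).json()
        jobs.extend(page["jobs"])
        offset += page_size
        if offset >= page["total"]:
            return jobs

completed = list_all_jobs(status="completed")
```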
### Cancel Job
```http
DELETE /v1/jobs/{job_id}
```

Cancel a running or submitted job.

**Response:**
```json
{
  "job_id": "job_1234567890",
  "status": "cancelled",
  "cancelled_at": "2024-01-01T12:03:00Z"
}
```

### Get Job Results
```http
GET /v1/jobs/{job_id}/results
```

Retrieve the results of a completed job.

**Response:**
```json
{
  "job_id": "job_1234567890",
  "status": "completed",
  "results": {
    "prediction": "cat",
    "confidence": 0.95,
    "processing_time": 1.23
  },
  "completed_at": "2024-01-01T12:04:00Z"
}
```

## Marketplace

### Create Offer
```http
POST /v1/marketplace/offers
```

Create a new marketplace offer for job execution.

**Request Body:**
```json
{
  "job_type": "image-classification",
  "price": "0.001",
  "max_jobs": 10,
  "requirements": {
    "min_gpu_memory": "4Gi",
    "min_cpu": "2000m"
  },
  "duration": 3600
}
```

**Response:**
```json
{
  "offer_id": "offer_1234567890",
  "miner_id": "miner_1234567890",
  "status": "active",
  "created_at": "2024-01-01T12:00:00Z"
}
```

### List Offers
```http
GET /v1/marketplace/offers
```

List all active marketplace offers.

**Query Parameters:**
- `job_type` (string): Filter by job type
- `max_price` (string): Maximum price filter
- `limit` (integer): Maximum number of offers (default: 50)

**Response:**
```json
{
  "offers": [
    {
      "offer_id": "offer_1234567890",
      "miner_id": "miner_1234567890",
      "job_type": "image-classification",
      "price": "0.001",
      "reputation": 4.8
    }
  ]
}
```

### Accept Offer
```http
POST /v1/marketplace/offers/{offer_id}/accept
```

Accept a marketplace offer for job execution.

**Request Body:**
```json
{
  "job_id": "job_1234567890",
  "bid_price": "0.001"
}
```

**Response:**
```json
{
  "transaction_id": "tx_1234567890",
  "status": "pending",
  "created_at": "2024-01-01T12:00:00Z"
}
```

## Receipts

### Get Receipt
```http
GET /v1/receipts/{job_id}
```

Retrieve the receipt for a completed job.

**Response:**
```json
{
  "receipt_id": "receipt_1234567890",
  "job_id": "job_1234567890",
  "miner_id": "miner_1234567890",
  "signature": {
    "sig": "base64_signature",
    "public_key": "base64_public_key"
  },
  "attestations": [
    {
      "type": "completion",
      "timestamp": "2024-01-01T12:04:00Z",
      "signature": "base64_attestation"
    }
  ],
  "created_at": "2024-01-01T12:04:00Z"
}
```

### Verify Receipt
```http
POST /v1/receipts/verify
```

Verify the authenticity of a receipt.

**Request Body:**
```json
{
  "receipt": {
    "receipt_id": "receipt_1234567890",
    "signature": {
      "sig": "base64_signature",
      "public_key": "base64_public_key"
    }
  }
}
```

**Response:**
```json
{
  "valid": true,
  "miner_signature_valid": true,
  "coordinator_attestations": 2,
  "verified_at": "2024-01-01T12:05:00Z"
}
```
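Taken together, a client can fetch a job's receipt and hand it back for verification. A minimal sketch with `requests`, using only the two endpoints documented above (the verifier expects the receipt fields shown in the request body):

```python
import requests

BASE_URL = "https://api.aitbc.io"
HEADERS = {"X-API-Key": "your_api_key_here"}

# Fetch the signed receipt for a completed job
receipt = requests.get(f"{BASE_URL}/v1/receipts/job_1234567890", headers=HEADERS).json()

# Ask the coordinator to verify the miner signature and attestations
verdict = requests.post(
    f"{BASE_URL}/v1/receipts/verify",
    headers=HEADERS,
    json={"receipt": receipt},
).json()
print(verdict["valid"], verdict["coordinator_attestations"])
```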
## Analytics

### Get Marketplace Stats
```http
GET /v1/marketplace/stats
```

Retrieve marketplace statistics.

**Response:**
```json
{
  "total_jobs": 10000,
  "active_jobs": 150,
  "completed_jobs": 9800,
  "failed_jobs": 50,
  "average_completion_time": 120.5,
  "total_volume": "1500.50",
  "active_miners": 500
}
```

### Get Miner Stats
```http
GET /v1/miners/{miner_id}/stats
```

Retrieve statistics for a specific miner.

**Response:**
```json
{
  "miner_id": "miner_1234567890",
  "reputation": 4.8,
  "total_jobs": 500,
  "success_rate": 0.98,
  "average_completion_time": 115.2,
  "total_earned": "125.50",
  "active_since": "2024-01-01T00:00:00Z"
}
```

## Health

### Health Check
```http
GET /v1/health
```

Check the health status of the coordinator service.

**Response:**
```json
{
  "status": "ok",
  "version": "1.0.0",
  "environment": "production",
  "timestamp": "2024-01-01T12:00:00Z",
  "services": {
    "database": "healthy",
    "blockchain": "healthy",
    "marketplace": "healthy"
  }
}
```

## WebSocket API

### Real-time Updates
```
WSS /ws
```

Connect to receive real-time updates about jobs and marketplace events.

**Message Types:**
- `job_update`: Job status changes
- `marketplace_update`: New offers or transactions
- `receipt_created`: New receipts generated

**Example Message:**
```json
{
  "type": "job_update",
  "data": {
    "job_id": "job_1234567890",
    "status": "completed",
    "timestamp": "2024-01-01T12:04:00Z"
  }
}
```
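For non-JavaScript clients, a bare connection sketch using the third-party `websockets` package (an assumption, not an official SDK). It relies only on the documented `wss` endpoint and the `job_update` message shape above, and omits any subscription or authentication handshake the server may require:

```python
import asyncio
import json

import websockets  # third-party package (pip install websockets); assumed here

async def listen():
    # Connect to the documented real-time endpoint
    async with websockets.connect("wss://api.aitbc.io/ws") as ws:
        async for raw in ws:
            event = json.loads(raw)
            if event.get("type") == "job_update":
                data = event["data"]
                print(data["job_id"], data["status"])

asyncio.run(listen())
```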
## Error Codes

| Code | Description | HTTP Status |
|------|-------------|-------------|
| `INVALID_JOB_TYPE` | Unsupported job type | 400 |
| `INSUFFICIENT_BALANCE` | Not enough funds in wallet | 402 |
| `JOB_NOT_FOUND` | Job does not exist | 404 |
| `JOB_ALREADY_COMPLETED` | Cannot modify completed job | 409 |
| `OFFER_NOT_AVAILABLE` | Offer is no longer available | 410 |
| `RATE_LIMIT_EXCEEDED` | Too many requests | 429 |
| `INTERNAL_ERROR` | Server error | 500 |

## SDK Examples

### Python
```python
from aitbc import AITBCClient

client = AITBCClient(api_key="your_key")

# Create a job
job = client.jobs.create({
    "name": "my-job",
    "type": "ai-inference",
    ...
})

# Get results
results = client.jobs.get_results(job["job_id"])
```

### JavaScript
```javascript
import { AITBCClient } from '@aitbc/client';

const client = new AITBCClient({ apiKey: 'your_key' });

// Create a job
const job = await client.jobs.create({
  name: 'my-job',
  type: 'ai-inference',
  ...
});

// Get results
const results = await client.jobs.getResults(job.jobId);
```

## Services

### Whisper Transcription
```http
POST /v1/services/whisper/transcribe
```

Transcribe an audio file using Whisper.

**Request Body:**
```json
{
  "audio_url": "https://example.com/audio.mp3",
  "model": "base",
  "language": "en",
  "task": "transcribe"
}
```

### Stable Diffusion Generation
```http
POST /v1/services/stable-diffusion/generate
```

Generate images from text prompts.

**Request Body:**
```json
{
  "prompt": "A beautiful sunset over mountains",
  "model": "stable-diffusion-1.5",
  "size": "1024x1024",
  "num_images": 1,
  "steps": 20
}
```

### LLM Inference
```http
POST /v1/services/llm/inference
```

Run inference on language models.

**Request Body:**
```json
{
  "model": "llama-7b",
  "prompt": "Explain quantum computing",
  "max_tokens": 256,
  "temperature": 0.7
}
```
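For illustration, the service endpoints are called like any other authenticated route. A sketch with `requests` using the documented path, request body, and `X-API-Key` header (the response shape is not specified here, so it is simply printed):

```python
import requests

BASE_URL = "https://api.aitbc.io"
HEADERS = {"X-API-Key": "your_api_key_here"}

payload = {
    "model": "llama-7b",
    "prompt": "Explain quantum computing",
    "max_tokens": 256,
    "temperature": 0.7,
}

# Submit an LLM inference request to the service endpoint
response = requests.post(f"{BASE_URL}/v1/services/llm/inference", headers=HEADERS, json=payload)
response.raise_for_status()
print(response.json())
```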
### Video Transcoding
```http
POST /v1/services/ffmpeg/transcode
```

Transcode video files.

**Request Body:**
```json
{
  "input_url": "https://example.com/video.mp4",
  "output_format": "mp4",
  "codec": "h264",
  "resolution": "1920x1080"
}
```

### 3D Rendering
```http
POST /v1/services/blender/render
```

Render 3D scenes with Blender.

**Request Body:**
```json
{
  "blend_file_url": "https://example.com/scene.blend",
  "engine": "cycles",
  "resolution_x": 1920,
  "resolution_y": 1080,
  "samples": 128
}
```

## Service Registry

### List All Services
```http
GET /v1/registry/services
```

List all available GPU services with optional filtering.

**Query Parameters:**
- `category` (optional): Filter by service category
- `search` (optional): Search by name, description, or tags

### Get Service Definition
```http
GET /v1/registry/services/{service_id}
```

Get detailed definition for a specific service.

### Get Service Schema
```http
GET /v1/registry/services/{service_id}/schema
```

Get JSON schema for service input parameters.

### Get Service Requirements
```http
GET /v1/registry/services/{service_id}/requirements
```

Get hardware requirements for a service.

### Validate Service Request
```http
POST /v1/registry/services/validate
```

Validate a service request against the service schema.

**Request Body:**
```json
{
  "service_id": "llm_inference",
  "request_data": {
    "model": "llama-7b",
    "prompt": "Hello world",
    "max_tokens": 256
  }
}
```

**Response:**
```json
{
  "valid": true,
  "errors": [],
  "warnings": []
}
```
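Together these endpoints let a client discover a service, inspect its schema and hardware requirements, and pre-validate a request before submitting it. A minimal sketch with `requests` (paths and payload as documented above; `llm_inference` is the service id used in the validation example):

```python
import requests

BASE_URL = "https://api.aitbc.io"
HEADERS = {"X-API-Key": "your_api_key_here"}

# Discover the available services
services = requests.get(f"{BASE_URL}/v1/registry/services", headers=HEADERS).json()

# Inspect the input schema and hardware requirements for one service
schema = requests.get(
    f"{BASE_URL}/v1/registry/services/llm_inference/schema", headers=HEADERS
).json()
requirements = requests.get(
    f"{BASE_URL}/v1/registry/services/llm_inference/requirements", headers=HEADERS
).json()

# Pre-validate a request before submitting it to /v1/services/llm/inference
check = requests.post(
    f"{BASE_URL}/v1/registry/services/validate",
    headers=HEADERS,
    json={
        "service_id": "llm_inference",
        "request_data": {"model": "llama-7b", "prompt": "Hello world", "max_tokens": 256},
    },
).json()
print(check["valid"], check["errors"])
```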
docs/developer/api/api/coordinator/openapi.md (new file, 79 lines)
@@ -0,0 +1,79 @@
---
title: OpenAPI Specification
description: Complete OpenAPI specification for the Coordinator API
---

# OpenAPI Specification

The complete OpenAPI 3.0 specification for the AITBC Coordinator API is available below.

## Interactive Documentation

- [Swagger UI](https://api.aitbc.io/docs) - Interactive API explorer
- [ReDoc](https://api.aitbc.io/redoc) - Alternative documentation view

## Download Specification

- [JSON Format](openapi.json) - Raw OpenAPI JSON
- [YAML Format](openapi.yaml) - OpenAPI YAML format

## Key Endpoints

### Jobs
- `POST /v1/jobs` - Create a new job
- `GET /v1/jobs/{job_id}` - Get job details
- `GET /v1/jobs` - List jobs
- `DELETE /v1/jobs/{job_id}` - Cancel job
- `GET /v1/jobs/{job_id}/results` - Get job results

### Marketplace
- `POST /v1/marketplace/offers` - Create offer
- `GET /v1/marketplace/offers` - List offers
- `POST /v1/marketplace/offers/{offer_id}/accept` - Accept offer

### Receipts
- `GET /v1/receipts/{job_id}` - Get receipt
- `POST /v1/receipts/verify` - Verify receipt

### Analytics
- `GET /v1/marketplace/stats` - Get marketplace statistics
- `GET /v1/miners/{miner_id}/stats` - Get miner statistics

## Authentication

All endpoints require authentication via the `X-API-Key` header.

## Rate Limits

API requests are rate-limited based on your subscription plan.

## WebSocket API

Real-time updates are available at:

- WebSocket: `wss://api.aitbc.io/ws`
- Message types: job_update, marketplace_update, receipt_created

## Code Generation

Use the OpenAPI spec to generate client libraries:

```bash
# OpenAPI Generator
openapi-generator-cli generate -i openapi.json -g python -o ./client/

# Or use the online generator at https://openapi-generator.tech/
```
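To script against the raw specification instead, a short sketch (it assumes the JSON document linked above is also served at `https://api.aitbc.io/openapi.json`; adjust the URL if your deployment differs):

```python
import requests

# Download the OpenAPI document and list the documented paths
spec = requests.get("https://api.aitbc.io/openapi.json").json()

for path, operations in sorted(spec.get("paths", {}).items()):
    print(path, "->", ", ".join(method.upper() for method in operations))
```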
## SDK Integration

The OpenAPI spec is integrated into our official SDKs:
- [Python SDK](../../developer-guide/sdks/python.md)
- [JavaScript SDK](../../developer-guide/sdks/javascript.md)

## Support

For API support:
- 📖 [API Documentation](endpoints.md)
- 🐛 [Report Issues](https://github.com/aitbc/issues)
- 💬 [Discord](https://discord.gg/aitbc)
- 📧 [api-support@aitbc.io](mailto:api-support@aitbc.io)
docs/developer/api/api/coordinator/overview.md (new file, 140 lines)
@@ -0,0 +1,140 @@
---
title: Coordinator API Overview
description: Introduction to the AITBC Coordinator API
---

# Coordinator API Overview

The Coordinator API is the central service of the AITBC platform, responsible for job management, marketplace operations, and coordination between various components.

## Base URL

```
Production:  https://api.aitbc.io
Staging:     https://staging-api.aitbc.io
Development: http://localhost:8011
```

## Authentication

All API endpoints require authentication using an API key. Include the API key in the request header:

```http
X-API-Key: your_api_key_here
```

Get your API key from the [AITBC Dashboard](https://dashboard.aitbc.io).

## Core Concepts

### Jobs
Jobs are the primary unit of work in AITBC. They represent AI computations that need to be executed.

```json
{
  "job_id": "job_1234567890",
  "type": "ai-inference",
  "status": "running",
  "created_at": "2024-01-01T12:00:00Z",
  "estimated_completion": "2024-01-01T12:05:00Z"
}
```

### Marketplace
The marketplace connects job creators with miners who can execute the jobs.

```json
{
  "offer_id": "offer_1234567890",
  "job_type": "image-classification",
  "price": "0.001",
  "miner_id": "miner_1234567890"
}
```

### Receipts
Receipts provide cryptographic proof of job execution and results.

```json
{
  "receipt_id": "receipt_1234567890",
  "job_id": "job_1234567890",
  "signature": {
    "sig": "base64_signature",
    "public_key": "base64_public_key"
  }
}
```

## Rate Limits

API requests are rate-limited to ensure fair usage:

| Plan | Requests per minute | Burst |
|------|---------------------|-------|
| Free | 60 | 10 |
| Pro | 600 | 100 |
| Enterprise | 6000 | 1000 |

## Error Handling

The API uses standard HTTP status codes and returns detailed error messages:

```json
{
  "error": {
    "code": "INVALID_API_KEY",
    "message": "The provided API key is invalid",
    "details": {
      "request_id": "req_1234567890"
    }
  }
}
```

Common error codes:
- `400 Bad Request` - Invalid request parameters
- `401 Unauthorized` - Invalid or missing API key
- `403 Forbidden` - Insufficient permissions
- `404 Not Found` - Resource not found
- `429 Too Many Requests` - Rate limit exceeded
- `500 Internal Server Error` - Server error

## SDK Support

Official SDKs are available for:
- [Python](../../developer-guide/sdks/python.md)
- [JavaScript/TypeScript](../../developer-guide/sdks/javascript.md)

## WebSocket API

Real-time updates are available through WebSocket connections:

```javascript
const ws = new WebSocket('wss://api.aitbc.io/ws');

ws.onmessage = (event) => {
  const data = JSON.parse(event.data);
  console.log('Job update:', data);
};
```

## OpenAPI Specification

The complete OpenAPI 3.0 specification is available:
- [View in Swagger UI](https://api.aitbc.io/docs)
- [Download JSON](openapi.md)

## Getting Started

1. [Get an API key](https://dashboard.aitbc.io/api-keys)
2. [Review authentication](authentication.md)
3. [Explore endpoints](endpoints.md)
4. [Check examples](../../developer-guide/examples.md)

## Support

- 📖 [Documentation](../../)
- 💬 [Discord](https://discord.gg/aitbc)
- 🐛 [Report Issues](https://github.com/aitbc/issues)
- 📧 [api-support@aitbc.io](mailto:api-support@aitbc.io)
docs/developer/contributing.md (new file, 99 lines)
@@ -0,0 +1,99 @@
---
title: Contributing
description: How to contribute to the AITBC project
---

# Contributing to AITBC

We welcome contributions from the community! This guide will help you get started.

## Ways to Contribute

### Code Contributions
- Fix bugs
- Add features
- Improve performance
- Write tests

### Documentation
- Improve docs
- Add examples
- Translate content
- Fix typos

### Community
- Answer questions
- Report issues
- Share feedback
- Organize events

## Getting Started

### 1. Fork Repository
```bash
git clone https://github.com/your-username/aitbc.git
cd aitbc
```

### 2. Setup Development Environment
```bash
# Install dependencies
pip install -r requirements-dev.txt

# Run tests
pytest

# Start development server
aitbc dev start
```

### 3. Create Branch
```bash
git checkout -b feature/your-feature-name
```

## Development Workflow

### Code Style
- Follow PEP 8 for Python
- Use ESLint for JavaScript
- Write clear commit messages
- Add tests for new features

### Testing
```bash
# Run all tests
pytest

# Run specific test
pytest tests/test_jobs.py

# Check coverage
pytest --cov=aitbc
```

### Submitting Changes
1. Push to your fork
2. Create pull request
3. Wait for review
4. Address feedback
5. Merge!

## Reporting Issues

- Use GitHub Issues
- Provide clear description
- Include reproduction steps
- Add relevant logs

## Code of Conduct

Please read and follow our [Code of Conduct](https://github.com/aitbc/blob/main/CODE_OF_CONDUCT.md).

## Getting Help

- Discord: https://discord.gg/aitbc
- Email: dev@aitbc.io
- Documentation: https://docs.aitbc.io

Thank you for contributing! 🎉
docs/developer/examples.md (new file, 131 lines)
@@ -0,0 +1,131 @@
---
title: Code Examples
description: Practical examples for building on AITBC
---

# Code Examples

This section provides practical examples for common tasks on the AITBC platform.

## Python Examples

### Basic Job Submission
```python
from aitbc import AITBCClient

client = AITBCClient(api_key="your_key")

job = client.jobs.create({
    "name": "image-classification",
    "type": "ai-inference",
    "model": {
        "type": "python",
        "entrypoint": "model.py",
        "requirements": ["torch", "pillow"]
    }
})

result = client.jobs.wait_for_completion(job["job_id"])
```

### Batch Job Processing
```python
import asyncio
from aitbc import AsyncAITBCClient

async def process_images(image_paths):
    client = AsyncAITBCClient(api_key="your_key")

    tasks = []
    for path in image_paths:
        job = await client.jobs.create({
            "name": f"process-{path}",
            "type": "image-analysis"
        })
        tasks.append(client.jobs.wait_for_completion(job["job_id"]))

    results = await asyncio.gather(*tasks)
    return results
```

## JavaScript Examples

### React Component
```jsx
import React, { useState, useEffect } from 'react';
import { AITBCClient } from '@aitbc/client';

function JobList() {
  const [jobs, setJobs] = useState([]);
  const client = new AITBCClient({ apiKey: 'your_key' });

  useEffect(() => {
    async function fetchJobs() {
      const jobList = await client.jobs.list();
      setJobs(jobList);
    }
    fetchJobs();
  }, []);

  return (
    <div>
      {jobs.map(job => (
        <div key={job.jobId}>
          <h3>{job.name}</h3>
          <p>Status: {job.status}</p>
        </div>
      ))}
    </div>
  );
}
```

### WebSocket Integration
```javascript
const client = new AITBCClient({ apiKey: 'your_key' });
const ws = client.websocket.connect();

ws.on('jobUpdate', (data) => {
  console.log(`Job ${data.jobId} updated to ${data.status}`);
});

ws.subscribe('jobs');
ws.start();
```

## CLI Examples

### Job Management
```bash
# Create job from file
aitbc job create job.yaml

# List all jobs
aitbc job list --status running

# Monitor job progress
aitbc job watch <job_id>

# Download results
aitbc job download <job_id> --output ./results/
```

### Marketplace Operations
```bash
# List available offers
aitbc marketplace list --type image-classification

# Create offer as miner
aitbc marketplace create-offer offer.yaml

# Accept offer
aitbc marketplace accept <offer_id> --job-id <job_id>
```

## Complete Examples

Find full working examples in our GitHub repositories:
- [Python SDK Examples](https://github.com/aitbc/python-sdk/tree/main/examples)
- [JavaScript SDK Examples](https://github.com/aitbc/js-sdk/tree/main/examples)
- [CLI Examples](https://github.com/aitbc/cli/tree/main/examples)
- [Smart Contract Examples](https://github.com/aitbc/contracts/tree/main/examples)
docs/developer/index.md (new file, 46 lines)
@@ -0,0 +1,46 @@
# AITBC Developer Documentation

Welcome to the AITBC developer documentation. This section contains resources for building on AITBC.

## Getting Started

- [Overview](overview.md) - Developer platform overview
- [Setup](setup.md) - Development environment setup
- [Contributing](contributing.md) - How to contribute to AITBC

## API Documentation

- [API Overview](api/overview.md) - REST API introduction
- [Authentication](api/authentication.md) - API authentication guide
- [Endpoints](api/endpoints.md) - Available API endpoints
- [OpenAPI Spec](api/openapi.md) - OpenAPI specification

## SDKs

- [Python SDK](sdks/python.md) - Python SDK documentation
- [JavaScript SDK](sdks/javascript.md) - JavaScript SDK documentation

## Tutorials & Examples

- [Examples](examples.md) - Code examples and tutorials
- [API Authentication](api-authentication.md) - Authentication examples

## Architecture

- [Architecture Guide](../reference/architecture/) - System architecture documentation
- [Design Patterns](../reference/architecture/) - Common patterns and best practices

## Testing

- [Testing Guide](testing.md) - How to test your AITBC applications
- [Test Examples](../examples/) - Test code examples

## Deployment

- [Deployment Guide](../operator/deployment/) - How to deploy AITBC applications
- [CI/CD](../operator/deployment/) - Continuous integration and deployment

## Reference

- [Glossary](../reference/glossary.md) - Terms and definitions
- [FAQ](../user-guide/faq.md) - Frequently asked questions
docs/developer/overview.md (new file, 269 lines)
@@ -0,0 +1,269 @@
---
title: Developer Overview
description: Introduction to developing on the AITBC platform
---

# Developer Overview

Welcome to the AITBC developer documentation! This guide will help you understand how to build applications and services on the AITBC blockchain platform.

## What You Can Build on AITBC

### AI/ML Applications
- **Inference Services**: Deploy and monetize AI models
- **Training Services**: Offer distributed model training
- **Data Processing**: Build data pipelines with verifiable computation

### DeFi Applications
- **Prediction Markets**: Create markets for AI predictions
- **Computational Derivatives**: Financial products based on AI outcomes
- **Staking Pools**: Earn rewards by providing compute resources

### NFT & Gaming
- **Generative Art**: Create AI-powered NFT generators
- **Dynamic NFTs**: NFTs that evolve based on AI computations
- **AI Gaming**: Games with AI-driven mechanics

### Infrastructure Tools
- **Oracles**: Bridge real-world data to blockchain
- **Monitoring Tools**: Track network performance
- **Development Tools**: SDKs, frameworks, and utilities

## Architecture Overview

```mermaid
graph TB
    subgraph "Developer Tools"
        A[Python SDK] --> E[Coordinator API]
        B[JS SDK] --> E
        C[CLI Tools] --> E
        D[Smart Contracts] --> F[Blockchain]
    end

    subgraph "AITBC Platform"
        E --> G[Marketplace]
        F --> H[Miners/Validators]
        G --> I[Job Execution]
    end

    subgraph "External Services"
        J[AI Models] --> I
        K[Storage] --> I
        L[Oracles] --> F
    end
```

## Key Concepts

### Jobs
Jobs are the fundamental unit of computation on AITBC. They represent AI tasks that need to be executed by miners.

### Smart Contracts
AITBC uses smart contracts for:
- Marketplace operations
- Payment processing
- Dispute resolution
- Governance

### Proofs & Receipts
All computations generate cryptographic proofs:
- **Execution Proofs**: Verify correct computation
- **Receipts**: Proof of job completion
- **Attestations**: Multiple validator signatures

### Tokens & Economics
- **AITBC Token**: Native utility token
- **Job Payments**: Pay for computation
- **Staking**: Secure the network
- **Rewards**: Earn for providing services

## Development Stack

### Core Technologies
- **Blockchain**: Custom PoS consensus
- **Smart Contracts**: Solidity-compatible
- **APIs**: RESTful with OpenAPI specs
- **WebSockets**: Real-time updates

### Languages & Frameworks
- **Python**: Primary SDK and ML support
- **JavaScript/TypeScript**: Web and Node.js support
- **Rust**: High-performance components
- **Go**: Infrastructure services

### Tools & Libraries
- **Docker**: Containerization
- **Kubernetes**: Orchestration
- **Prometheus**: Monitoring
- **Grafana**: Visualization

## Getting Started

### 1. Set Up Development Environment

```bash
# Install AITBC CLI
pip install aitbc-cli

# Initialize project
aitbc init my-project
cd my-project

# Start local development
aitbc dev start
```

### 2. Choose Your Path

#### AI/ML Developer
- Focus on model integration
- Learn about job specifications
- Understand proof generation

#### DApp Developer
- Study smart contract patterns
- Master the SDKs
- Build user interfaces

#### Infrastructure Developer
- Run a node or miner
- Build tools and utilities
- Contribute to core protocol

### 3. Build Your First Application

Choose a tutorial based on your interest:

- [AI Inference Service](../../tutorials/building-dapp.md)
- [Marketplace Bot](../../tutorials/integration-examples.md)
- [Mining Operation](../../tutorials/mining-setup.md)

## Developer Resources

### Documentation
- [API Reference](../api/)
- [SDK Guides](sdks/)
- [Examples](examples.md)
- [Best Practices](best-practices.md)

### Tools
- [AITBC CLI](tools/cli.md)
- [IDE Plugins](tools/ide-plugins.md)
- [Testing Framework](tools/testing.md)

### Community
- [Discord](https://discord.gg/aitbc)
- [GitHub Discussions](https://github.com/aitbc/discussions)
- [Stack Overflow](https://stackoverflow.com/questions/tagged/aitbc)

## Development Workflow

### 1. Local Development
```bash
# Start local testnet
aitbc dev start

# Run tests
aitbc test

# Deploy locally
aitbc deploy --local
```

### 2. Testnet Deployment
```bash
# Configure for testnet
aitbc config set network testnet

# Deploy to testnet
aitbc deploy --testnet

# Verify deployment
aitbc status
```

### 3. Production Deployment
```bash
# Configure for mainnet
aitbc config set network mainnet

# Deploy to production
aitbc deploy --mainnet

# Monitor deployment
aitbc monitor
```

## Security Considerations

### Smart Contract Security
- Follow established patterns
- Use audited libraries
- Test thoroughly
- Consider formal verification

### API Security
- Use API keys properly
- Implement rate limiting
- Validate inputs
- Use HTTPS everywhere

### Key Management
- Never commit private keys
- Use hardware wallets
- Implement multi-sig
- Regular key rotation

## Performance Optimization

### Job Optimization
- Minimize computation overhead
- Use efficient data formats
- Batch operations when possible
- Profile and benchmark

### Cost Optimization
- Optimize resource usage
- Use spot instances when possible
- Implement caching
- Monitor spending

## Contributing to AITBC

We welcome contributions! Areas where you can help:

### Core Protocol
- Consensus improvements
- New cryptographic primitives
- Performance optimizations
- Bug fixes

### Developer Tools
- SDK improvements
- New language support
- Better documentation
- Tooling enhancements

### Ecosystem
- Sample applications
- Tutorials and guides
- Community support
- Integration examples

See our [Contributing Guide](contributing.md) for details.

## Support

- 📖 [Documentation](../)
- 💬 [Discord](https://discord.gg/aitbc)
- 🐛 [Issue Tracker](https://github.com/aitbc/issues)
- 📧 [dev-support@aitbc.io](mailto:dev-support@aitbc.io)

## Next Steps

1. [Set up your environment](setup.md)
2. [Learn about authentication](api-authentication.md)
3. [Choose an SDK](sdks/)
4. [Build your first app](../../tutorials/)

Happy building! 🚀
docs/developer/sdks/javascript.md (new file, 279 lines)
@@ -0,0 +1,279 @@
---
title: JavaScript SDK
description: JavaScript/TypeScript SDK for AITBC platform integration
---

# JavaScript SDK

The AITBC JavaScript SDK provides a convenient way to interact with the AITBC platform from JavaScript and TypeScript applications.

## Installation

```bash
# npm
npm install @aitbc/client

# yarn
yarn add @aitbc/client

# pnpm
pnpm add @aitbc/client
```

## Quick Start

```javascript
import { AITBCClient } from '@aitbc/client';

// Initialize the client
const client = new AITBCClient({
  apiKey: 'your_api_key_here',
  baseUrl: 'https://api.aitbc.io'
});

// Create a job
const job = await client.jobs.create({
  name: 'image-classification',
  type: 'ai-inference',
  model: {
    type: 'python',
    entrypoint: 'model.js'
  }
});

console.log('Job created:', job.jobId);
```

## Configuration

### Environment Variables
```bash
AITBC_API_KEY=your_api_key
AITBC_BASE_URL=https://api.aitbc.io
AITBC_NETWORK=mainnet
```

### Code Configuration
```javascript
const client = new AITBCClient({
  apiKey: process.env.AITBC_API_KEY,
  baseUrl: process.env.AITBC_BASE_URL,
  timeout: 30000,
  retries: 3
});
```

## Jobs API

### Create a Job
```javascript
const job = await client.jobs.create({
  name: 'my-ai-job',
  type: 'ai-inference',
  model: {
    type: 'javascript',
    entrypoint: 'model.js',
    dependencies: ['@tensorflow/tfjs']
  },
  input: {
    type: 'image',
    format: 'jpeg'
  },
  output: {
    type: 'json'
  }
});
```

### Monitor Job Progress
```javascript
// Get job status
const status = await client.jobs.getStatus(job.jobId);
console.log('Status:', status.status);

// Stream updates
client.jobs.onUpdate(job.jobId, (update) => {
  console.log('Update:', update);
});

// Wait for completion
const result = await client.jobs.waitForCompletion(job.jobId, {
  timeout: 300000,
  pollInterval: 5000
});
```

## Marketplace API

### List Offers
```javascript
const offers = await client.marketplace.listOffers({
  jobType: 'image-classification',
  maxPrice: '0.01'
});

offers.forEach(offer => {
  console.log(`Offer: ${offer.offerId}, Price: ${offer.price}`);
});
```

### Accept Offer
```javascript
const transaction = await client.marketplace.acceptOffer({
  offerId: 'offer_123',
  jobId: 'job_456',
  bidPrice: '0.001'
});
```

## Wallet API

### Wallet Operations
```javascript
// Get balance
const balance = await client.wallet.getBalance();
console.log('Balance:', balance);

// Send tokens
const tx = await client.wallet.send({
  to: '0x123...',
  amount: '1.0',
  token: 'AITBC'
});

// Stake tokens
await client.wallet.stake({
  amount: '100.0'
});
```

## WebSocket API

### Real-time Updates
```javascript
// Connect to WebSocket
const ws = client.websocket.connect();

// Subscribe to events
ws.subscribe('jobs', { jobId: 'job_123' });
ws.subscribe('marketplace');

// Handle events
ws.on('jobUpdate', (data) => {
  console.log('Job updated:', data);
});

ws.on('marketplaceUpdate', (data) => {
  console.log('Marketplace updated:', data);
});

// Start listening
ws.start();
```

## TypeScript Support

The SDK is fully typed for TypeScript:

```typescript
import { AITBCClient, Job, JobStatus } from '@aitbc/client';

const client: AITBCClient = new AITBCClient({
  apiKey: 'your_key'
});

const job: Job = await client.jobs.create({
  name: 'typed-job',
  type: 'ai-inference'
});

const status: JobStatus = await client.jobs.getStatus(job.jobId);
```

## Error Handling

```javascript
import {
  AITBCError,
  APIError,
  AuthenticationError,
  NotFoundError,
  RateLimitError
} from '@aitbc/client';

try {
  const job = await client.jobs.create({});
} catch (error) {
  if (error instanceof AuthenticationError) {
    console.error('Invalid API key');
  } else if (error instanceof RateLimitError) {
    console.error(`Rate limited. Retry in ${error.retryAfter}ms`);
  } else if (error instanceof APIError) {
    console.error(`API error: ${error.message}`);
  }
}
```

## React Integration

```jsx
import React, { useState, useEffect } from 'react';
import { AITBCClient } from '@aitbc/client';

function JobComponent() {
  const [jobs, setJobs] = useState([]);
  const client = new AITBCClient({ apiKey: 'your_key' });

  useEffect(() => {
    async function fetchJobs() {
      const jobList = await client.jobs.list();
      setJobs(jobList);
    }
    fetchJobs();
  }, []);

  return (
    <div>
      {jobs.map(job => (
        <div key={job.jobId}>{job.name}</div>
      ))}
    </div>
  );
}
```

## Node.js Integration

```javascript
const express = require('express');
const { AITBCClient } = require('@aitbc/client');

const app = express();
const client = new AITBCClient({ apiKey: process.env.API_KEY });

app.post('/jobs', async (req, res) => {
  try {
    const job = await client.jobs.create(req.body);
    res.json(job);
  } catch (error) {
    res.status(500).json({ error: error.message });
  }
});

app.listen(3000);
```

## Examples

Check out the [examples directory](https://github.com/aitbc/js-sdk/tree/main/examples) for complete working examples:

- [Basic Job Submission](https://github.com/aitbc/js-sdk/blob/main/examples/basic-job.js)
- [React Integration](https://github.com/aitbc/js-sdk/blob/main/examples/react-app/)
- [WebSocket Streaming](https://github.com/aitbc/js-sdk/blob/main/examples/websocket.js)

## Support

- 📖 [Documentation](../../)
- 🐛 [Issue Tracker](https://github.com/aitbc/js-sdk/issues)
- 💬 [Discord](https://discord.gg/aitbc)
- 📧 [js-sdk@aitbc.io](mailto:js-sdk@aitbc.io)
494
docs/developer/sdks/python.md
Normal file
@ -0,0 +1,494 @@
|
||||
---
|
||||
title: Python SDK
|
||||
description: Python SDK for AITBC platform integration
|
||||
---
|
||||
|
||||
# Python SDK
|
||||
|
||||
The AITBC Python SDK provides a convenient way to interact with the AITBC platform from Python applications. It includes support for job management, marketplace operations, wallet management, and more.
|
||||
|
||||
## Installation
|
||||
|
||||
```bash
|
||||
# Install from PyPI
|
||||
pip install aitbc
|
||||
|
||||
# Or install from source
|
||||
git clone https://github.com/aitbc/python-sdk.git
|
||||
cd python-sdk
|
||||
pip install -e .
|
||||
```
|
||||
|
||||
## Quick Start
|
||||
|
||||
```python
|
||||
from aitbc import AITBCClient
|
||||
|
||||
# Initialize the client
|
||||
client = AITBCClient(
|
||||
api_key="your_api_key_here",
|
||||
base_url="https://api.aitbc.io" # or http://localhost:8011 for dev
|
||||
)
|
||||
|
||||
# Create a job
|
||||
job = client.jobs.create({
|
||||
"name": "image-classification",
|
||||
"type": "ai-inference",
|
||||
"model": {
|
||||
"type": "python",
|
||||
"entrypoint": "model.py"
|
||||
}
|
||||
})
|
||||
|
||||
# Wait for completion
|
||||
result = client.jobs.wait_for_completion(job["job_id"])
|
||||
print(f"Result: {result}")
|
||||
```
|
||||
|
||||
## Configuration
|
||||
|
||||
### Environment Variables
|
||||
```bash
|
||||
export AITBC_API_KEY="your_api_key"
|
||||
export AITBC_BASE_URL="https://api.aitbc.io"
|
||||
export AITBC_NETWORK="mainnet" # or testnet
|
||||
```
|
||||
|
||||
### Code Configuration
|
||||
```python
|
||||
from aitbc import AITBCClient, Config
|
||||
|
||||
# Using Config object
|
||||
config = Config(
|
||||
api_key="your_api_key",
|
||||
base_url="https://api.aitbc.io",
|
||||
timeout=30,
|
||||
retries=3
|
||||
)
|
||||
|
||||
client = AITBCClient(config=config)
|
||||
```
|
||||
|
||||
## Jobs API
|
||||
|
||||
### Create a Job
|
||||
|
||||
```python
|
||||
# Basic job creation
|
||||
job = client.jobs.create({
|
||||
"name": "my-ai-job",
|
||||
"type": "ai-inference",
|
||||
"model": {
|
||||
"type": "python",
|
||||
"entrypoint": "model.py",
|
||||
"requirements": ["numpy", "torch"]
|
||||
},
|
||||
"input": {
|
||||
"type": "image",
|
||||
"format": "jpeg"
|
||||
},
|
||||
"output": {
|
||||
"type": "json"
|
||||
},
|
||||
"resources": {
|
||||
"cpu": "1000m",
|
||||
"memory": "2Gi"
|
||||
},
|
||||
"pricing": {
|
||||
"max_cost": "0.10"
|
||||
}
|
||||
})
|
||||
|
||||
print(f"Job created: {job['job_id']}")
|
||||
```
|
||||
|
||||
### Upload Job Data
|
||||
|
||||
```python
|
||||
# Upload input files
|
||||
with open("input.jpg", "rb") as f:
|
||||
client.jobs.upload_input(job["job_id"], f, "image.jpg")
|
||||
|
||||
# Or upload multiple files
|
||||
files = [
|
||||
("image1.jpg", open("image1.jpg", "rb")),
|
||||
("image2.jpg", open("image2.jpg", "rb"))
|
||||
]
|
||||
client.jobs.upload_inputs(job["job_id"], files)
|
||||
```
|
||||
|
||||
### Monitor Job Progress
|
||||
|
||||
```python
|
||||
# Get job status
|
||||
status = client.jobs.get_status(job["job_id"])
|
||||
print(f"Status: {status['status']}")
|
||||
|
||||
# Stream updates
|
||||
for update in client.jobs.stream_updates(job["job_id"]):
|
||||
print(f"Update: {update}")
|
||||
|
||||
# Wait for completion with timeout
|
||||
result = client.jobs.wait_for_completion(
|
||||
job["job_id"],
|
||||
timeout=300, # 5 minutes
|
||||
poll_interval=5
|
||||
)
|
||||
```
|
||||
|
||||
### Get Results
|
||||
|
||||
```python
|
||||
# Get job results
|
||||
results = client.jobs.get_results(job["job_id"])
|
||||
print(f"Results: {results}")
|
||||
|
||||
# Download output files
|
||||
client.jobs.download_output(job["job_id"], "output/")
|
||||
client.jobs.download_outputs(job["job_id"], "outputs/") # All files
|
||||
```
|
||||
|
||||
## Marketplace API
|
||||
|
||||
### List Available Offers
|
||||
|
||||
```python
|
||||
# List all offers
|
||||
offers = client.marketplace.list_offers()
|
||||
|
||||
# Filter by job type
|
||||
offers = client.marketplace.list_offers(
|
||||
job_type="image-classification",
|
||||
max_price="0.01"
|
||||
)
|
||||
|
||||
for offer in offers:
|
||||
print(f"Offer: {offer['offer_id']}, Price: {offer['price']}")
|
||||
```
|
||||
|
||||
### Create and Manage Offers
|
||||
|
||||
```python
|
||||
# Create an offer (as a miner)
|
||||
offer = client.marketplace.create_offer({
|
||||
"job_type": "image-classification",
|
||||
"price": "0.001",
|
||||
"max_jobs": 10,
|
||||
"requirements": {
|
||||
"min_gpu_memory": "4Gi"
|
||||
}
|
||||
})
|
||||
|
||||
# Update offer
|
||||
client.marketplace.update_offer(
|
||||
offer["offer_id"],
|
||||
price="0.002"
|
||||
)
|
||||
|
||||
# Cancel offer
|
||||
client.marketplace.cancel_offer(offer["offer_id"])
|
||||
```
|
||||
|
||||
### Accept Offers
|
||||
|
||||
```python
|
||||
# Accept an offer for your job
|
||||
transaction = client.marketplace.accept_offer(
|
||||
offer_id="offer_123",
|
||||
job_id="job_456",
|
||||
bid_price="0.001"
|
||||
)
|
||||
|
||||
print(f"Transaction: {transaction['transaction_id']}")
|
||||
```
|
||||
|
||||
## Wallet API
|
||||
|
||||
### Wallet Management
|
||||
|
||||
```python
|
||||
# Create a new wallet
|
||||
wallet = client.wallet.create()
|
||||
print(f"Address: {wallet['address']}")
|
||||
|
||||
# Import existing wallet
|
||||
wallet = client.wallet.import_private_key("your_private_key")
|
||||
|
||||
# Get wallet info
|
||||
balance = client.wallet.get_balance()
|
||||
address = client.wallet.get_address()
|
||||
```
|
||||
|
||||
### Transactions
|
||||
|
||||
```python
|
||||
# Send tokens
|
||||
tx = client.wallet.send(
|
||||
to="0x123...",
|
||||
amount="1.0",
|
||||
token="AITBC"
|
||||
)
|
||||
|
||||
# Stake tokens
|
||||
client.wallet.stake(amount="100.0")
|
||||
|
||||
# Unstake tokens
|
||||
client.wallet.unstake(amount="50.0")
|
||||
|
||||
# Get transaction history
|
||||
history = client.wallet.get_transactions(limit=50)
|
||||
```
|
||||
|
||||
## Receipts API
|
||||
|
||||
### Verify Receipts
|
||||
|
||||
```python
|
||||
# Get a receipt
|
||||
receipt = client.receipts.get(job_id="job_123")
|
||||
|
||||
# Verify a receipt
|
||||
verification = client.receipts.verify(receipt)
|
||||
print(f"Valid: {verification['valid']}")
|
||||
|
||||
# Or verify locally without calling the API
|
||||
from aitbc.crypto import verify_receipt
|
||||
|
||||
is_valid = verify_receipt(receipt)
|
||||
```
|
||||
|
||||
### Stream Receipts
|
||||
|
||||
```python
|
||||
# Stream new receipts
|
||||
for receipt in client.receipts.stream():
|
||||
print(f"New receipt: {receipt['receipt_id']}")
|
||||
```
|
||||
|
||||
## WebSocket API
|
||||
|
||||
### Real-time Updates
|
||||
|
||||
```python
|
||||
# Connect to WebSocket
|
||||
ws = client.websocket.connect()
|
||||
|
||||
# Subscribe to job updates
|
||||
ws.subscribe("jobs", job_id="job_123")
|
||||
|
||||
# Subscribe to marketplace updates
|
||||
ws.subscribe("marketplace")
|
||||
|
||||
# Handle messages
|
||||
@ws.on_message
|
||||
def handle_message(message):
|
||||
print(f"Received: {message}")
|
||||
|
||||
# Start listening
|
||||
ws.listen()
|
||||
```
|
||||
|
||||
### Advanced WebSocket Usage
|
||||
|
||||
```python
|
||||
# Custom event handlers
|
||||
ws = client.websocket.connect()
|
||||
|
||||
@ws.on_job_update
|
||||
def on_job_update(job_id, status):
|
||||
print(f"Job {job_id} status: {status}")
|
||||
|
||||
@ws.on_marketplace_update
|
||||
def on_marketplace_update(update_type, data):
|
||||
print(f"Marketplace {update_type}: {data}")
|
||||
|
||||
# Run with context manager
|
||||
with client.websocket.connect() as ws:
|
||||
ws.subscribe("jobs")
|
||||
ws.listen(timeout=60)
|
||||
```
|
||||
|
||||
## Error Handling
|
||||
|
||||
```python
|
||||
from aitbc.exceptions import (
|
||||
AITBCError,
|
||||
APIError,
|
||||
AuthenticationError,
|
||||
NotFoundError,
|
||||
RateLimitError
|
||||
)
|
||||
|
||||
try:
|
||||
job = client.jobs.create({...})
|
||||
except AuthenticationError:
|
||||
print("Invalid API key")
|
||||
except RateLimitError as e:
|
||||
print(f"Rate limited. Retry in {e.retry_after} seconds")
|
||||
except APIError as e:
|
||||
print(f"API error: {e.message}")
|
||||
except AITBCError as e:
|
||||
print(f"AITBC error: {e}")
|
||||
```
|
||||
|
||||
## Advanced Usage
|
||||
|
||||
### Custom HTTP Client
|
||||
|
||||
```python
|
||||
import requests
|
||||
from aitbc import AITBCClient
|
||||
|
||||
# Use custom session
|
||||
session = requests.Session()
|
||||
session.headers.update({"User-Agent": "MyApp/1.0"})
|
||||
|
||||
client = AITBCClient(
|
||||
api_key="your_key",
|
||||
session=session
|
||||
)
|
||||
```
|
||||
|
||||
### Async Support
|
||||
|
||||
```python
|
||||
import asyncio
|
||||
from aitbc import AsyncAITBCClient
|
||||
|
||||
async def main():
|
||||
client = AsyncAITBCClient(api_key="your_key")
|
||||
|
||||
# Create job
|
||||
job = await client.jobs.create({...})
|
||||
|
||||
# Wait for completion
|
||||
result = await client.jobs.wait_for_completion(job["job_id"])
|
||||
|
||||
print(f"Result: {result}")
|
||||
|
||||
asyncio.run(main())
|
||||
```
|
||||
|
||||
### Batch Operations
|
||||
|
||||
```python
|
||||
# Create multiple jobs
|
||||
jobs = [
|
||||
{"name": f"job-{i}", "type": "ai-inference"}
|
||||
for i in range(10)
|
||||
]
|
||||
|
||||
created_jobs = client.jobs.create_batch(jobs)
|
||||
|
||||
# Get status of multiple jobs
|
||||
statuses = client.jobs.get_status_batch([
|
||||
job["job_id"] for job in created_jobs
|
||||
])
|
||||
```
|
||||
|
||||
## Testing
|
||||
|
||||
### Mock Client for Testing
|
||||
|
||||
```python
|
||||
from aitbc.testing import MockAITBCClient
|
||||
|
||||
# Use mock client for tests
|
||||
client = MockAITBCClient()
|
||||
|
||||
# Configure responses
|
||||
client.jobs.set_response("create", {"job_id": "test_job"})
|
||||
|
||||
# Test your code
|
||||
job = client.jobs.create({...})
|
||||
assert job["job_id"] == "test_job"
|
||||
```
|
||||
|
||||
### Integration Tests
|
||||
|
||||
```python
|
||||
import pytest
|
||||
from aitbc import AITBCClient
|
||||
|
||||
@pytest.fixture
|
||||
def client():
|
||||
return AITBCClient(
|
||||
api_key="test_key",
|
||||
base_url="http://localhost:8011"
|
||||
)
|
||||
|
||||
def test_job_creation(client):
|
||||
job = client.jobs.create({
|
||||
"name": "test-job",
|
||||
"type": "ai-inference"
|
||||
})
|
||||
assert "job_id" in job
|
||||
```
|
||||
|
||||
## Best Practices
|
||||
|
||||
### 1. Configuration Management
|
||||
```python
|
||||
# Use environment variables
|
||||
import os
|
||||
from aitbc import AITBCClient
|
||||
|
||||
client = AITBCClient(
|
||||
api_key=os.getenv("AITBC_API_KEY"),
|
||||
base_url=os.getenv("AITBC_BASE_URL", "https://api.aitbc.io")
|
||||
)
|
||||
```
|
||||
|
||||
### 2. Error Handling
|
||||
```python
|
||||
# Always handle potential errors
|
||||
try:
|
||||
result = client.jobs.get_results(job_id)
|
||||
except NotFoundError:
|
||||
print("Job not found")
|
||||
except APIError as e:
|
||||
print(f"API error: {e}")
|
||||
```
|
||||
|
||||
### 3. Resource Management
|
||||
```python
|
||||
# Use context managers for resources
|
||||
with client.jobs.upload_context(job_id) as ctx:
|
||||
ctx.upload_file("model.py")
|
||||
ctx.upload_file("requirements.txt")
|
||||
```
|
||||
|
||||
### 4. Performance
|
||||
```python
|
||||
# Use async for concurrent operations
|
||||
async def process_jobs(job_ids):
|
||||
client = AsyncAITBCClient(api_key="your_key")
|
||||
|
||||
tasks = [
|
||||
client.jobs.get_results(job_id)
|
||||
for job_id in job_ids
|
||||
]
|
||||
|
||||
results = await asyncio.gather(*tasks)
|
||||
return results
|
||||
```
|
||||
|
||||
## Examples
|
||||
|
||||
Check out the [examples directory](https://github.com/aitbc/python-sdk/tree/main/examples) for complete working examples:
|
||||
|
||||
- [Basic Job Submission](https://github.com/aitbc/python-sdk/blob/main/examples/basic_job.py)
|
||||
- [Marketplace Bot](https://github.com/aitbc/python-sdk/blob/main/examples/marketplace_bot.py)
|
||||
- [Mining Operation](https://github.com/aitbc/python-sdk/blob/main/examples/mining.py)
|
||||
- [WebSocket Streaming](https://github.com/aitbc/python-sdk/blob/main/examples/websocket_streaming.py)
|
||||
|
||||
## Support
|
||||
|
||||
- 📖 [Documentation](../../)
|
||||
- 🐛 [Issue Tracker](https://github.com/aitbc/python-sdk/issues)
|
||||
- 💬 [Discord](https://discord.gg/aitbc)
|
||||
- 📧 [python-sdk@aitbc.io](mailto:python-sdk@aitbc.io)
|
||||
|
||||
## Changelog
|
||||
|
||||
See [CHANGELOG.md](https://github.com/aitbc/python-sdk/blob/main/CHANGELOG.md) for version history and updates.
|
||||
76
docs/developer/setup.md
Normal file
@ -0,0 +1,76 @@
|
||||
---
|
||||
title: Development Setup
|
||||
description: Set up your development environment for AITBC
|
||||
---
|
||||
|
||||
# Development Setup
|
||||
|
||||
This guide helps you set up a development environment for building on AITBC.
|
||||
|
||||
## Prerequisites
|
||||
|
||||
- Python 3.8+
|
||||
- Git
|
||||
- Docker (optional)
|
||||
- Node.js 16+ (for frontend development)
|
||||
|
||||
## Local Development
|
||||
|
||||
### 1. Clone Repository
|
||||
```bash
|
||||
git clone https://github.com/aitbc/aitbc.git
|
||||
cd aitbc
|
||||
```
|
||||
|
||||
### 2. Install Dependencies
|
||||
```bash
|
||||
# Python dependencies
|
||||
pip install -r requirements.txt
|
||||
|
||||
# Development dependencies
|
||||
pip install -r requirements-dev.txt
|
||||
```
|
||||
|
||||
### 3. Start Services
|
||||
```bash
|
||||
# Using Docker Compose
|
||||
docker-compose -f docker-compose.dev.yml up -d
|
||||
|
||||
# Or start individually
|
||||
aitbc dev start
|
||||
```
|
||||
|
||||
### 4. Verify Setup
|
||||
```bash
|
||||
# Check services
|
||||
aitbc status
|
||||
|
||||
# Run tests
|
||||
pytest
|
||||
```
|
||||
|
||||
## IDE Setup
|
||||
|
||||
### VS Code
|
||||
Install extensions:
|
||||
- Python
|
||||
- Docker
|
||||
- GitLens
|
||||
|
||||
### PyCharm
|
||||
Configure Python interpreter and enable Docker integration.
|
||||
|
||||
## Environment Variables
|
||||
|
||||
Create `.env` file:
|
||||
```bash
|
||||
AITBC_API_KEY=your_dev_key
|
||||
AITBC_BASE_URL=http://localhost:8011
|
||||
AITBC_NETWORK=testnet
|
||||
```
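
If you prefer to load these values from code instead of exporting them in your shell, a minimal sketch using the optional `python-dotenv` package (an assumption, not a listed project dependency) is shown below.

```python
# Sketch: load .env values and build a client for local development.
# Assumes the optional python-dotenv package is installed (pip install python-dotenv).
import os

from dotenv import load_dotenv
from aitbc import AITBCClient

load_dotenv()  # reads .env from the current working directory

client = AITBCClient(
    api_key=os.getenv("AITBC_API_KEY"),
    base_url=os.getenv("AITBC_BASE_URL", "http://localhost:8011"),
)
```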
|
||||
|
||||
## Next Steps
|
||||
|
||||
- [API Authentication](api-authentication.md)
|
||||
- [Python SDK](sdks/python.md)
|
||||
- [Examples](examples.md)
|
||||
70
docs/done.md
@ -1,70 +0,0 @@
|
||||
# Completed Bootstrap Tasks
|
||||
|
||||
## Repository Initialization
|
||||
|
||||
- Scaffolded core monorepo directories reflected in `docs/bootstrap/dirs.md`.
|
||||
- Added top-level config files: `.editorconfig`, `.gitignore`, `LICENSE`, and root `README.md`.
|
||||
- Created Windsurf workspace metadata under `windsurf/`.
|
||||
|
||||
## Documentation
|
||||
|
||||
- Authored `docs/roadmap.md` capturing staged development targets.
|
||||
- Added README placeholders for primary apps under `apps/` to outline purpose and setup notes.
|
||||
|
||||
## Coordinator API
|
||||
|
||||
- Implemented SQLModel-backed job persistence and service layer in `apps/coordinator-api/src/app/`.
|
||||
- Wired client, miner, and admin routers to coordinator services (job lifecycle, scheduling, stats).
|
||||
- Added initial pytest coverage under `apps/coordinator-api/tests/test_jobs.py`.
|
||||
- Added signed receipt generation, persistence (`Job.receipt`, `JobReceipt` history table), retrieval endpoints, telemetry metrics, and optional coordinator attestations.
|
||||
- Persisted historical receipts via `JobReceipt`; exposed `/v1/jobs/{job_id}/receipts` endpoint and integrated canonical serialization.
|
||||
- Documented receipt attestation configuration (`RECEIPT_ATTESTATION_KEY_HEX`) in `docs/run.md` and coordinator README.
|
||||
|
||||
## Miner Node
|
||||
|
||||
- Created coordinator client, control loop, and capability/backoff utilities in `apps/miner-node/src/aitbc_miner/`.
|
||||
- Implemented CLI/Python runners and execution pipeline with result reporting.
|
||||
- Added starter tests for runners in `apps/miner-node/tests/test_runners.py`.
|
||||
|
||||
## Blockchain Node
|
||||
|
||||
- Added websocket fan-out, disconnect cleanup, and load-test coverage in `apps/blockchain-node/tests/test_websocket.py`, ensuring gossip topics deliver reliably to multiple subscribers.
|
||||
|
||||
## Directory Preparation
|
||||
|
||||
- Established scaffolds for Python and JavaScript packages in `packages/py/` and `packages/js/`.
|
||||
- Seeded example project directories under `examples/` for quickstart clients and receipt verification.
|
||||
- Added `examples/receipts-sign-verify/fetch_and_verify.py` demonstrating coordinator receipt fetching + verification using Python SDK.
|
||||
|
||||
## Python SDK
|
||||
|
||||
- Created `packages/py/aitbc-sdk/` with coordinator receipt client and verification helpers consuming `aitbc_crypto` utilities.
|
||||
- Added pytest coverage under `packages/py/aitbc-sdk/tests/test_receipts.py` validating miner/coordinator signature checks and client behavior.
|
||||
|
||||
## Wallet Daemon
|
||||
|
||||
- Added `apps/wallet-daemon/src/app/receipts/service.py` providing `ReceiptVerifierService` that fetches and validates receipts via `aitbc_sdk`.
|
||||
- Created unit tests under `apps/wallet-daemon/tests/test_receipts.py` verifying service behavior.
|
||||
- Implemented wallet SDK receipt ingestion + attestation surfacing in `packages/py/aitbc-sdk/src/receipts.py`, including pagination client, signature verification, and failure diagnostics with full pytest coverage.
|
||||
- Hardened REST API by wiring dependency overrides in `apps/wallet-daemon/tests/test_wallet_api.py`, expanding workflow coverage (create/list/unlock/sign) and enforcing structured password policy errors consumed in CI.
|
||||
|
||||
## Explorer Web
|
||||
|
||||
- Initialized a Vite + TypeScript scaffold in `apps/explorer-web/` with `vite.config.ts`, `tsconfig.json`, and placeholder `src/main.ts` content.
|
||||
- Installed frontend dependencies locally to unblock editor tooling and TypeScript type resolution.
|
||||
- Implemented `overview` page stats rendering backed by mock block/transaction/receipt fetchers, including robust empty-state handling and TypeScript type fixes.
|
||||
|
||||
## Pool Hub
|
||||
|
||||
- Implemented FastAPI service scaffolding with Redis/PostgreSQL-backed repositories, match/health/metrics endpoints, and Prometheus instrumentation (`apps/pool-hub/src/poolhub/`).
|
||||
- Added Alembic migrations (`apps/pool-hub/migrations/`) and async integration tests covering repositories and endpoints (`apps/pool-hub/tests/`).
|
||||
|
||||
## Solidity Token
|
||||
|
||||
- Implemented attested minting logic in `packages/solidity/aitbc-token/contracts/AIToken.sol` using `AccessControl` role gates and ECDSA signature recovery.
|
||||
- Added Hardhat unit tests in `packages/solidity/aitbc-token/test/aitoken.test.ts` covering successful minting, replay prevention, and invalid attestor signatures.
|
||||
- Configured project TypeScript settings via `packages/solidity/aitbc-token/tsconfig.json` to align Hardhat, Node, and Mocha typings for the contract test suite.
|
||||
|
||||
## JavaScript SDK
|
||||
|
||||
- Delivered fetch-based client wrapper with TypeScript definitions and Vitest coverage under `packages/js/aitbc-sdk/`.
|
||||
478
docs/ecosystem/certification/ecosystem-certification-criteria.md
Normal file
@ -0,0 +1,478 @@
|
||||
# AITBC Ecosystem Certification Criteria
|
||||
|
||||
## Overview
|
||||
|
||||
This document defines the certification criteria for AITBC ecosystem partners, SDK implementations, and integrations. Certification ensures quality, security, and compatibility across the AITBC ecosystem.
|
||||
|
||||
## Certification Tiers
|
||||
|
||||
### Bronze Certification (Free)
|
||||
**Target**: Basic compatibility and security standards
|
||||
**Valid for**: 1 year
|
||||
**Requirements**:
|
||||
- SDK conformance with core APIs
|
||||
- Basic security practices
|
||||
- Documentation completeness
|
||||
|
||||
### Silver Certification ($500/year)
|
||||
**Target**: Production-ready implementations
|
||||
**Valid for**: 1 year
|
||||
**Requirements**:
|
||||
- All Bronze requirements
|
||||
- Performance benchmarks
|
||||
- Advanced security practices
|
||||
- Support commitments
|
||||
|
||||
### Gold Certification ($2,000/year)
|
||||
**Target**: Enterprise-grade implementations
|
||||
**Valid for**: 1 year
|
||||
**Requirements**:
|
||||
- All Silver requirements
|
||||
- SLA commitments
|
||||
- Independent security audit
|
||||
- 24/7 support availability
|
||||
|
||||
## Detailed Criteria
|
||||
|
||||
### 1. SDK Conformance Requirements
|
||||
|
||||
#### Bronze Level
|
||||
- **Core API Compatibility** (Required)
|
||||
- All public endpoints implemented
|
||||
- Request/response formats match specification
|
||||
- Error handling follows AITBC standards
|
||||
- Authentication methods supported (Bearer, OAuth2, HMAC); a signing sketch follows this subsection
|
||||
|
||||
- **Data Model Compliance** (Required)
|
||||
- Transaction models match specification
|
||||
- Field types and constraints enforced
|
||||
- Required fields validated
|
||||
- Optional fields handled gracefully
|
||||
|
||||
- **Async Support** (Required)
|
||||
- Non-blocking operations for I/O
|
||||
- Proper async/await implementation
|
||||
- Timeout handling
|
||||
- Error propagation in async context
|
||||
|
||||
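Of the three authentication methods listed under Core API Compatibility, HMAC signing is the least self-explanatory. The sketch below shows one possible shape in Python; the header names and canonical string are illustrative assumptions, not the normative AITBC signing scheme.

```python
# Sketch of HMAC request signing for conformance testing.
# Header names and the canonical string are assumptions for illustration only.
import hashlib
import hmac
import time

import requests


def signed_get(base_url: str, path: str, api_key: str, secret: str) -> requests.Response:
    timestamp = str(int(time.time()))
    canonical = f"GET\n{path}\n{timestamp}"  # assumed canonical string
    signature = hmac.new(secret.encode(), canonical.encode(), hashlib.sha256).hexdigest()
    headers = {
        "Authorization": f"Bearer {api_key}",
        "X-AITBC-Timestamp": timestamp,   # assumed header name
        "X-AITBC-Signature": signature,   # assumed header name
    }
    return requests.get(f"{base_url}{path}", headers=headers, timeout=30)
```
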
#### Silver Level
|
||||
- **Performance Benchmarks** (Required)
|
||||
- API response time < 100ms (95th percentile)
|
||||
- Concurrent request handling > 1000/second
|
||||
- Memory usage < 512MB for typical workload
|
||||
- CPU efficiency < 50% for sustained load
|
||||
|
||||
- **Rate Limiting** (Required)
|
||||
- Client-side rate limiting implementation
|
||||
- Backoff strategy on 429 responses
|
||||
- Configurable rate limits
|
||||
- Burst handling capability
|
||||
|
||||
- **Retry Logic** (Required; see the backoff sketch after this list)
|
||||
- Exponential backoff implementation
|
||||
- Idempotent operation handling
|
||||
- Retry configuration options
|
||||
- Circuit breaker pattern
|
||||
|
||||
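The rate limiting and retry requirements above are usually implemented together. The sketch below shows one way to honor 429 responses with exponential backoff and jitter; the retry budget and `Retry-After` handling are illustrative assumptions rather than mandated values.

```python
# Sketch: exponential backoff with jitter for 429 responses.
# The retry budget and header handling are illustrative, not mandated values.
import random
import time

import requests


def get_with_backoff(url: str, headers: dict, max_attempts: int = 5) -> requests.Response:
    delay = 1.0
    response = requests.get(url, headers=headers, timeout=30)
    for _ in range(max_attempts - 1):
        if response.status_code != 429:
            return response
        # Prefer the server's Retry-After hint when present, otherwise back off exponentially.
        retry_after = response.headers.get("Retry-After")
        wait = float(retry_after) if retry_after else delay + random.uniform(0, delay)
        time.sleep(wait)
        delay *= 2
        response = requests.get(url, headers=headers, timeout=30)
    return response
```
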
#### Gold Level
|
||||
- **Enterprise Features** (Required)
|
||||
- Multi-tenant support
|
||||
- Audit logging capabilities
|
||||
- Metrics and monitoring integration
|
||||
- Health check endpoints
|
||||
|
||||
- **Scalability** (Required)
|
||||
- Horizontal scaling support
|
||||
- Load balancer compatibility
|
||||
- Database connection pooling
|
||||
- Caching layer integration
|
||||
|
||||
### 2. Security Requirements
|
||||
|
||||
#### Bronze Level
|
||||
- **Authentication** (Required)
|
||||
- Secure credential storage
|
||||
- No hardcoded secrets
|
||||
- API key rotation support
|
||||
- Token expiration handling
|
||||
|
||||
- **Transport Security** (Required)
|
||||
- TLS 1.2+ enforcement
|
||||
- Certificate validation
|
||||
- HTTPS-only in production
|
||||
- HSTS headers
|
||||
|
||||
- **Input Validation** (Required)
|
||||
- SQL injection prevention
|
||||
- XSS protection
|
||||
- Input sanitization
|
||||
- Parameter validation
|
||||
|
||||
#### Silver Level
|
||||
- **Authorization** (Required)
|
||||
- Role-based access control
|
||||
- Principle of least privilege
|
||||
- Permission validation
|
||||
- Resource ownership checks
|
||||
|
||||
- **Data Protection** (Required)
|
||||
- Encryption at rest
|
||||
- PII handling compliance
|
||||
- Data retention policies
|
||||
- Secure backup procedures
|
||||
|
||||
- **Vulnerability Management** (Required)
|
||||
- Dependency scanning
|
||||
- Security patching process
|
||||
- CVE monitoring
|
||||
- Security incident response
|
||||
|
||||
#### Gold Level
|
||||
- **Advanced Security** (Required)
|
||||
- Zero-trust architecture
|
||||
- End-to-end encryption
|
||||
- Hardware security module support
|
||||
- Penetration testing results
|
||||
|
||||
- **Compliance** (Required)
|
||||
- SOC 2 Type II compliance
|
||||
- GDPR compliance
|
||||
- ISO 27001 certification
|
||||
- Industry-specific compliance
|
||||
|
||||
### 3. Documentation Requirements
|
||||
|
||||
#### Bronze Level
|
||||
- **API Documentation** (Required)
|
||||
- Complete endpoint documentation
|
||||
- Request/response examples
|
||||
- Error code reference
|
||||
- Authentication guide
|
||||
|
||||
- **Getting Started** (Required)
|
||||
- Installation instructions
|
||||
- Quick start guide
|
||||
- Basic usage examples
|
||||
- Configuration options
|
||||
|
||||
- **Code Examples** (Required)
|
||||
- Basic integration examples
|
||||
- Error handling examples
|
||||
- Authentication examples
|
||||
- Common use cases
|
||||
|
||||
#### Silver Level
|
||||
- **Advanced Documentation** (Required)
|
||||
- Architecture overview
|
||||
- Performance tuning guide
|
||||
- Troubleshooting guide
|
||||
- Migration guide
|
||||
|
||||
- **SDK Reference** (Required)
|
||||
- Complete API reference
|
||||
- Class and method documentation
|
||||
- Parameter descriptions
|
||||
- Return value specifications
|
||||
|
||||
- **Integration Guides** (Required)
|
||||
- Framework-specific guides
|
||||
- Platform-specific instructions
|
||||
- Best practices guide
|
||||
- Common patterns
|
||||
|
||||
#### Gold Level
|
||||
- **Enterprise Documentation** (Required)
|
||||
- Deployment guide
|
||||
- Monitoring setup
|
||||
- Security configuration
|
||||
- Compliance documentation
|
||||
|
||||
- **Support Documentation** (Required)
|
||||
- SLA documentation
|
||||
- Support procedures
|
||||
- Escalation process
|
||||
- Contact information
|
||||
|
||||
### 4. Testing Requirements
|
||||
|
||||
#### Bronze Level
|
||||
- **Unit Tests** (Required)
|
||||
- >80% code coverage
|
||||
- Core functionality tested
|
||||
- Error conditions tested
|
||||
- Edge cases covered
|
||||
|
||||
- **Integration Tests** (Required)
|
||||
- API endpoint tests
|
||||
- Authentication flow tests
|
||||
- Error scenario tests
|
||||
- Basic workflow tests
|
||||
|
||||
#### Silver Level
|
||||
- **Performance Tests** (Required)
|
||||
- Load testing results
|
||||
- Stress testing
|
||||
- Memory leak testing
|
||||
- Concurrency testing
|
||||
|
||||
- **Security Tests** (Required)
|
||||
- Authentication bypass tests
|
||||
- Authorization tests
|
||||
- Input validation tests
|
||||
- Dependency vulnerability scans
|
||||
|
||||
#### Gold Level
|
||||
- **Comprehensive Tests** (Required)
|
||||
- Chaos engineering tests
|
||||
- Disaster recovery tests
|
||||
- Compliance validation
|
||||
- Third-party audit results
|
||||
|
||||
### 5. Support Requirements
|
||||
|
||||
#### Bronze Level
|
||||
- **Basic Support** (Required)
|
||||
- Issue tracking system
|
||||
- Response time < 72 hours
|
||||
- Bug fix process
|
||||
- Community support
|
||||
|
||||
#### Silver Level
|
||||
- **Professional Support** (Required)
|
||||
- Email support
|
||||
- Response time < 24 hours
|
||||
- Phone support option
|
||||
- Dedicated support contact
|
||||
|
||||
#### Gold Level
|
||||
- **Enterprise Support** (Required)
|
||||
- 24/7 support availability
|
||||
- Response time < 1 hour
|
||||
- Dedicated account manager
|
||||
- On-site support option
|
||||
|
||||
## Certification Process
|
||||
|
||||
### 1. Self-Assessment
|
||||
- Review criteria against implementation
|
||||
- Complete self-assessment checklist
|
||||
- Prepare documentation
|
||||
- Run test suite locally
|
||||
|
||||
### 2. Submission
|
||||
- Submit self-assessment results
|
||||
- Provide test results
|
||||
- Submit documentation
|
||||
- Pay certification fee (if applicable)
|
||||
|
||||
### 3. Verification
|
||||
- Automated test execution
|
||||
- Documentation review
|
||||
- Security scan
|
||||
- Performance validation
|
||||
|
||||
### 4. Approval
|
||||
- Review by certification board
|
||||
- Issue certification
|
||||
- Publish to registry
|
||||
- Provide certification assets
|
||||
|
||||
### 5. Maintenance
|
||||
- Annual re-certification
|
||||
- Continuous monitoring
|
||||
- Compliance checks
|
||||
- Update documentation
|
||||
|
||||
## Testing Infrastructure
|
||||
|
||||
### Automated Test Suite
|
||||
```python
|
||||
# Example test structure
|
||||
class BronzeCertificationTests:
|
||||
def test_api_compliance(self):
|
||||
"""Test API endpoint compliance"""
|
||||
pass
|
||||
|
||||
def test_authentication(self):
|
||||
"""Test authentication methods"""
|
||||
pass
|
||||
|
||||
def test_error_handling(self):
|
||||
"""Test error handling standards"""
|
||||
pass
|
||||
|
||||
class SilverCertificationTests(BronzeCertificationTests):
|
||||
def test_performance_benchmarks(self):
|
||||
"""Test performance requirements"""
|
||||
pass
|
||||
|
||||
def test_security_practices(self):
|
||||
"""Test security implementation"""
|
||||
pass
|
||||
|
||||
class GoldCertificationTests(SilverCertificationTests):
|
||||
def test_enterprise_features(self):
|
||||
"""Test enterprise capabilities"""
|
||||
pass
|
||||
|
||||
def test_compliance(self):
|
||||
"""Test compliance requirements"""
|
||||
pass
|
||||
```
|
||||
|
||||
### Test Categories
|
||||
1. **Functional Tests**
|
||||
- API compliance
|
||||
- Data model validation
|
||||
- Error handling
|
||||
- Authentication flows
|
||||
|
||||
2. **Performance Tests**
|
||||
- Response time
|
||||
- Throughput
|
||||
- Resource usage
|
||||
- Scalability
|
||||
|
||||
3. **Security Tests**
|
||||
- Authentication
|
||||
- Authorization
|
||||
- Input validation
|
||||
- Vulnerability scanning
|
||||
|
||||
4. **Documentation Tests**
|
||||
- Completeness check
|
||||
- Accuracy validation
|
||||
- Example verification
|
||||
- Accessibility
|
||||
|
||||
## Certification Badges
|
||||
|
||||
### Badge Display
|
||||
```html
|
||||
<!-- Bronze Badge -->
|
||||
<img src="https://cert.aitbc.io/badges/bronze.svg"
|
||||
alt="AITBC Bronze Certified" />
|
||||
|
||||
<!-- Silver Badge -->
|
||||
<img src="https://cert.aitbc.io/badges/silver.svg"
|
||||
alt="AITBC Silver Certified" />
|
||||
|
||||
<!-- Gold Badge -->
|
||||
<img src="https://cert.aitbc.io/badges/gold.svg"
|
||||
alt="AITBC Gold Certified" />
|
||||
```
|
||||
|
||||
### Badge Requirements
|
||||
- Must link to certification page
|
||||
- Must display current certification level
|
||||
- Must show expiration date
|
||||
- Must include verification ID
|
||||
|
||||
## Compliance Monitoring
|
||||
|
||||
### Continuous Monitoring
|
||||
- Automated daily compliance checks
|
||||
- Performance monitoring
|
||||
- Security scanning
|
||||
- Documentation validation
|
||||
|
||||
### Violation Handling
|
||||
- 30-day grace period for violations
|
||||
- Temporary suspension for critical issues
|
||||
- Revocation for repeated violations
|
||||
- Appeal process available
|
||||
|
||||
## Registry Integration
|
||||
|
||||
### Public Registry Information
|
||||
- Company name and description
|
||||
- Certification level and date
|
||||
- Supported SDK versions
|
||||
- Contact information
|
||||
- Compliance status
|
||||
|
||||
### API Access
|
||||
```text
|
||||
# Example registry API
|
||||
GET /api/v1/certified-partners
|
||||
GET /api/v1/partner/{id}
|
||||
GET /api/v1/certification/{id}/verify
|
||||
```
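
A minimal consumer of these endpoints might look like the following; the registry host and response fields are assumptions inferred from the endpoint list above.

```python
# Sketch: query the public registry for certified partners.
# The base URL and response shape are assumptions drawn from the endpoint list above.
import requests

BASE_URL = "https://cert.aitbc.io"  # hypothetical registry host

response = requests.get(f"{BASE_URL}/api/v1/certified-partners", timeout=30)
response.raise_for_status()
for partner in response.json().get("partners", []):
    print(partner.get("name"), partner.get("certification_level"))
```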
|
||||
|
||||
## Version Compatibility
|
||||
|
||||
### SDK Version Support
|
||||
- Certify against major versions
|
||||
- Support for 2 previous major versions
|
||||
- Migration path documentation
|
||||
- Deprecation notice requirements
|
||||
|
||||
### Compatibility Matrix
|
||||
| SDK Version | Bronze | Silver | Gold | Status |
|
||||
|-------------|---------|---------|------|---------|
|
||||
| 1.x | ✓ | ✓ | ✓ | Current |
|
||||
| 0.9.x | ✓ | ✓ | ✗ | Deprecated |
|
||||
| 0.8.x | ✓ | ✗ | ✗ | End of Life |
|
||||
|
||||
## Appeals Process
|
||||
|
||||
### Appeal Categories
|
||||
1. Technical disagreement
|
||||
2. Documentation clarification
|
||||
3. Security assessment dispute
|
||||
4. Performance benchmark challenge
|
||||
|
||||
### Appeal Process
|
||||
1. Submit appeal with evidence
|
||||
2. Review by appeals committee
|
||||
3. Response within 14 days
|
||||
4. Final decision binding
|
||||
|
||||
## Certification Revocation
|
||||
|
||||
### Revocation Triggers
|
||||
- Critical security vulnerability
|
||||
- Compliance violation
|
||||
- Misrepresentation
|
||||
- Support failure
|
||||
|
||||
### Revocation Process
|
||||
1. Notification of violation
|
||||
2. 30-day cure period
|
||||
3. Revocation notice
|
||||
4. Public registry update
|
||||
5. Appeal opportunity
|
||||
|
||||
## Fees and Pricing
|
||||
|
||||
### Certification Fees
|
||||
- Bronze: Free
|
||||
- Silver: $500/year
|
||||
- Gold: $2,000/year
|
||||
|
||||
### Additional Services
|
||||
- Expedited review: +$500
|
||||
- On-site audit: $5,000
|
||||
- Custom certification: Quote
|
||||
- Re-certification: 50% of initial fee
|
||||
|
||||
## Contact Information
|
||||
|
||||
- **Certification Program**: certification@aitbc.io
|
||||
- **Technical Support**: support@aitbc.io
|
||||
- **Security Issues**: security@aitbc.io
|
||||
- **Appeals**: appeals@aitbc.io
|
||||
|
||||
## Updates and Changes
|
||||
|
||||
### Criteria Updates
|
||||
- Quarterly review cycle
|
||||
- 30-day notice for changes
|
||||
- Grandfathering provisions
|
||||
- Transition period provided
|
||||
|
||||
### Version History
|
||||
- v1.0: Initial certification criteria
|
||||
- v1.1: Added security requirements
|
||||
- v1.2: Enhanced performance benchmarks
|
||||
- v2.0: Restructured tier system
|
||||
241
docs/ecosystem/certification/ecosystem-certification-summary.md
Normal file
@ -0,0 +1,241 @@
|
||||
# AITBC Ecosystem Certification Program - Implementation Summary
|
||||
|
||||
## Overview
|
||||
|
||||
The AITBC Ecosystem Certification Program establishes quality, security, and compatibility standards for third-party SDKs and integrations. This document summarizes the implementation of the core certification infrastructure.
|
||||
|
||||
## Completed Components
|
||||
|
||||
### 1. Certification Criteria & Tiers
|
||||
|
||||
**Document**: `/docs/ecosystem/certification/ecosystem-certification-criteria.md`
|
||||
|
||||
**Features**:
|
||||
- Three-tier certification system (Bronze, Silver, Gold)
|
||||
- Comprehensive requirements for each tier
|
||||
- Clear pricing structure (Bronze: Free, Silver: $500/year, Gold: $2000/year)
|
||||
- Detailed testing and documentation requirements
|
||||
- Support and SLA commitments
|
||||
|
||||
**Key Requirements**:
|
||||
- **Bronze**: API compliance, basic security, documentation
|
||||
- **Silver**: Performance benchmarks, advanced security, professional support
|
||||
- **Gold**: Enterprise features, independent audit, 24/7 support
|
||||
|
||||
### 2. SDK Conformance Test Suite
|
||||
|
||||
**Location**: `/ecosystem-certification/test-suite/`
|
||||
|
||||
**Architecture**:
|
||||
- Language-agnostic black-box testing approach
|
||||
- JSON/YAML test fixtures for API compliance (a sample fixture is sketched at the end of this section)
|
||||
- Docker-based test runners for each language
|
||||
- OpenAPI contract validation
|
||||
|
||||
**Components**:
|
||||
- Test fixtures for Bronze certification (10 core API tests)
|
||||
- Python test runner implementation
|
||||
- Extensible framework for additional languages
|
||||
- Detailed compliance reporting
|
||||
|
||||
**Test Coverage**:
|
||||
- API endpoint compliance
|
||||
- Authentication and authorization
|
||||
- Error handling standards
|
||||
- Data model validation
|
||||
- Rate limiting headers
|
||||
|
||||
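For orientation, a single Bronze fixture might look roughly like this (expressed as a Python dict mirroring the JSON shape; the field names are illustrative, not the normative fixture schema):

```python
# Sketch of one Bronze API-compliance fixture (field names are illustrative).
fixture = {
    "name": "create_job_returns_201",
    "request": {
        "method": "POST",
        "path": "/v1/jobs",
        "headers": {"Authorization": "Bearer ${API_KEY}"},
        "body": {"name": "test-job", "type": "ai-inference"},
    },
    "expect": {
        "status": 201,
        "body_contains": ["job_id", "status"],
    },
}
```
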
### 3. Security Validation Framework
|
||||
|
||||
**Location**: `/ecosystem-certification/test-suite/security/`
|
||||
|
||||
**Features**:
|
||||
- Multi-language support (Python, Java, JavaScript/TypeScript)
|
||||
- Automated dependency scanning
|
||||
- Static code analysis integration
|
||||
- SARIF format output for industry compatibility
|
||||
|
||||
**Security Tools**:
|
||||
- **Python**: Safety (dependencies), Bandit (code), TruffleHog (secrets)
|
||||
- **Java**: OWASP Dependency Check, SpotBugs
|
||||
- **JavaScript/TypeScript**: npm audit, ESLint security rules
|
||||
|
||||
**Validation Levels**:
|
||||
- **Bronze**: Dependency scanning (blocks on critical/high CVEs)
|
||||
- **Silver**: + Code analysis
|
||||
- **Gold**: + Secret scanning, TypeScript config checks
|
||||
|
||||
### 4. Public Registry API
|
||||
|
||||
**Location**: `/ecosystem-certification/registry/api-specification.yaml`
|
||||
|
||||
**Endpoints**:
|
||||
- `/partners` - List and search certified partners
|
||||
- `/partners/{id}` - Partner details and certification info
|
||||
- `/partners/{id}/verify` - Certification verification
|
||||
- `/sdks` - Certified SDK directory
|
||||
- `/search` - Cross-registry search
|
||||
- `/stats` - Registry statistics
|
||||
- `/badges/{id}/{level}.svg` - Certification badges
|
||||
|
||||
**Features**:
|
||||
- RESTful API design
|
||||
- Comprehensive filtering and search
|
||||
- Pagination support
|
||||
- Certification verification endpoints
|
||||
- SVG badge generation
|
||||
|
||||
## Architecture Decisions
|
||||
|
||||
### 1. Language-Agnostic Testing
|
||||
- Chose black-box HTTP API testing over white-box SDK testing
|
||||
- Enables validation of any language implementation
|
||||
- Focuses on wire protocol compliance
|
||||
- Uses Docker for isolated test environments
|
||||
|
||||
### 2. Tiered Certification Approach
|
||||
- Bronze certification free to encourage adoption
|
||||
- Progressive requirements justify higher tiers
|
||||
- Clear value proposition at each level
|
||||
- Annual renewal ensures continued compliance
|
||||
|
||||
### 3. Automated Security Validation
|
||||
- Dependency scanning as minimum requirement
|
||||
- SARIF output for industry standard compatibility
|
||||
- Block certification only for critical issues
|
||||
- 30-day remediation window for lower severity
|
||||
|
||||
### 4. Self-Service Model
|
||||
- JSON/YAML test fixtures enable local testing
|
||||
- Partners can validate before submission
|
||||
- Reduces manual review overhead
|
||||
- Scales to hundreds of partners
|
||||
|
||||
## Next Steps (Medium Priority)
|
||||
|
||||
### 1. Self-Service Certification Portal
|
||||
- Web interface for test submission
|
||||
- Dashboard for certification status
|
||||
- Automated report generation
|
||||
- Payment processing for tiers
|
||||
|
||||
### 2. Badge/Certification Issuance
|
||||
- SVG badge generation system
|
||||
- Verification API for badge validation
|
||||
- Embeddable certification widgets
|
||||
- Certificate PDF generation
|
||||
|
||||
### 3. Continuous Monitoring
|
||||
- Automated re-certification checks
|
||||
- Compliance monitoring dashboards
|
||||
- Security scan scheduling
|
||||
- Expiration notifications
|
||||
|
||||
### 4. Partner Onboarding
|
||||
- Guided onboarding workflow
|
||||
- Documentation templates
|
||||
- Best practices guides
|
||||
- Community support forums
|
||||
|
||||
## Technical Implementation Details
|
||||
|
||||
### Test Suite Structure
|
||||
```
|
||||
ecosystem-certification/
|
||||
├── test-suite/
|
||||
│ ├── fixtures/ # JSON test cases
|
||||
│ ├── runners/ # Language-specific runners
|
||||
│ ├── security/ # Security validation
|
||||
│ └── reports/ # Test results
|
||||
├── registry/
|
||||
│ ├── api-specification.yaml
|
||||
│ └── website/ # Future
|
||||
└── certification/
|
||||
├── criteria.md
|
||||
└── process.md
|
||||
```
|
||||
|
||||
### Certification Flow
|
||||
1. Partner downloads test suite
|
||||
2. Runs tests locally with their SDK
|
||||
3. Submits results via API/portal
|
||||
4. Automated verification runs
|
||||
5. Security validation executes
|
||||
6. Certification issued if passed
|
||||
7. Listed in public registry
|
||||
|
||||
### Security Scanning Process
|
||||
1. Identify SDK language
|
||||
2. Run language-specific scanners
|
||||
3. Aggregate results in SARIF format
|
||||
4. Calculate security score
|
||||
5. Block certification for critical issues
|
||||
6. Generate remediation report
|
||||
|
||||
## Integration with AITBC Platform
|
||||
|
||||
### Multi-Tenant Support
|
||||
- Certification tied to tenant accounts
|
||||
- Tenant-specific test environments
|
||||
- Billing integration for certification fees
|
||||
- Audit logging of certification activities
|
||||
|
||||
### API Integration
|
||||
- Test endpoints in staging environment
|
||||
- Mock server for contract testing
|
||||
- Rate limiting during tests
|
||||
- Comprehensive logging
|
||||
|
||||
### Monitoring Integration
|
||||
- Certification metrics tracking
|
||||
- Partner satisfaction surveys
|
||||
- Compliance rate monitoring
|
||||
- Security issue tracking
|
||||
|
||||
## Benefits for Ecosystem
|
||||
|
||||
### For Partners
|
||||
- Quality differentiation in marketplace
|
||||
- Trust signal for enterprise customers
|
||||
- Access to AITBC enterprise features
|
||||
- Marketing and promotional benefits
|
||||
|
||||
### For Customers
|
||||
- Assurance of SDK quality and security
|
||||
- Easier partner evaluation
|
||||
- Reduced integration risk
|
||||
- Better support experience
|
||||
|
||||
### For AITBC
|
||||
- Ecosystem quality control
|
||||
- Enterprise credibility
|
||||
- Revenue from certification fees
|
||||
- Reduced support burden
|
||||
|
||||
## Metrics for Success
|
||||
|
||||
### Adoption Metrics
|
||||
- Number of certified partners
|
||||
- Certification distribution by tier
|
||||
- Growth rate over time
|
||||
- Partner satisfaction scores
|
||||
|
||||
### Quality Metrics
|
||||
- Average compliance scores
|
||||
- Security issue trends
|
||||
- Test failure rates
|
||||
- Recertification success rates
|
||||
|
||||
### Business Metrics
|
||||
- Revenue from certifications
|
||||
- Enterprise customer acquisition
|
||||
- Support ticket reduction
|
||||
- Partner retention rates
|
||||
|
||||
## Conclusion
|
||||
|
||||
The AITBC Ecosystem Certification Program provides a solid foundation for ensuring quality, security, and compatibility across the ecosystem. The implemented components establish AITBC as a professional, enterprise-ready platform while maintaining accessibility for developers.
|
||||
|
||||
The modular design allows for future enhancements and additional language support. The automated approach scales efficiently while maintaining thorough validation standards.
|
||||
|
||||
This certification program will be a key differentiator for AITBC in the enterprise market and help build trust with customers adopting third-party integrations.
|
||||
317
docs/ecosystem/ecosystem-initiatives-summary.md
Normal file
@ -0,0 +1,317 @@
|
||||
# AITBC Ecosystem Initiatives - Implementation Summary
|
||||
|
||||
## Executive Summary
|
||||
|
||||
The AITBC ecosystem initiatives establish a comprehensive framework for driving community growth, fostering innovation, and ensuring sustainable development. This document summarizes the implemented systems for hackathons, grants, marketplace extensions, and analytics that form the foundation of AITBC's ecosystem strategy.
|
||||
|
||||
## Initiative Overview
|
||||
|
||||
### 1. Hackathon Program
|
||||
**Objective**: Drive innovation and build high-quality marketplace extensions through themed developer events.
|
||||
|
||||
**Key Features**:
|
||||
- Quarterly themed hackathons (DeFi, Enterprise, Developer Experience, Cross-Chain)
|
||||
- 1-week duration with hybrid virtual/local format
|
||||
- Bounty board for high-value extensions ($5k-$10k standing rewards)
|
||||
- Tiered prize structure with deployment grants and mentorship
|
||||
- Comprehensive judging criteria (40% ecosystem impact, 30% technical, 20% innovation, 10% usability)
|
||||
|
||||
**Implementation**:
|
||||
- Complete organizational framework in `/docs/hackathon-framework.md`
|
||||
- Template-based project scaffolding
|
||||
- Automated judging and submission tracking
|
||||
- Post-event support and integration assistance
|
||||
|
||||
**Success Metrics**:
|
||||
- Target: 100-500 participants per event
|
||||
- Goal: 40% project deployment rate
|
||||
- KPI: Network effects created per project
|
||||
|
||||
### 2. Grant Program
|
||||
**Objective**: Provide ongoing funding for ecosystem-critical projects with accountability.
|
||||
|
||||
**Key Features**:
|
||||
- Hybrid model: Rolling micro-grants ($1k-5k) + Quarterly standard grants ($10k-50k)
|
||||
- Milestone-based disbursement (50% upfront, 50% on delivery)
|
||||
- Retroactive grants for proven projects
|
||||
- Category focus: Extensions (40%), Analytics (30%), Dev Tools (20%), Research (10%)
|
||||
- Comprehensive support package (technical, business, community)
|
||||
|
||||
**Implementation**:
|
||||
- Detailed program structure in `/docs/ecosystem/grants/grant-program.md`
|
||||
- Lightweight application process for micro-grants
|
||||
- Rigorous review for strategic grants
|
||||
- Automated milestone tracking and payments
|
||||
|
||||
**Success Metrics**:
|
||||
- Target: 50+ grants annually
|
||||
- Goal: 85% project success rate
|
||||
- ROI: 2.5x average return on investment
|
||||
|
||||
### 3. Marketplace Extension SDK
|
||||
**Objective**: Enable developers to easily build and deploy extensions for the AITBC marketplace.
|
||||
|
||||
**Key Features**:
|
||||
- Cookiecutter-based project scaffolding
|
||||
- Service-based architecture with Docker containers
|
||||
- Extension.yaml manifest for lifecycle management
|
||||
- Built-in metrics and health checks
|
||||
- Multi-language support (Python first, expanding to Java/JS)
|
||||
|
||||
**Implementation**:
|
||||
- Templates in `/ecosystem-extensions/template/`
|
||||
- Based on existing Python SDK patterns
|
||||
- Comprehensive documentation and examples
|
||||
- Automated testing and deployment pipelines
|
||||
|
||||
**Extension Types**:
|
||||
- Payment processors (Stripe, PayPal, Square)
|
||||
- ERP connectors (SAP, Oracle, NetSuite)
|
||||
- Analytics tools (dashboards, reporting)
|
||||
- Developer tools (IDE plugins, frameworks)
|
||||
|
||||
**Success Metrics**:
|
||||
- Target: 25+ extensions in first year
|
||||
- Goal: 50k+ downloads
|
||||
- KPI: Developer satisfaction >4.5/5
|
||||
|
||||
### 4. Analytics Service
|
||||
**Objective**: Measure ecosystem growth and make data-driven decisions.
|
||||
|
||||
**Key Features**:
|
||||
- Real-time metric collection from all initiatives
|
||||
- Comprehensive dashboard with KPIs
|
||||
- ROI analysis for grants and hackathons
|
||||
- Adoption tracking for extensions
|
||||
- Network effects measurement
|
||||
|
||||
**Implementation**:
|
||||
- Service in `/ecosystem-analytics/analytics_service.py`
|
||||
- Plotly-based visualizations
|
||||
- Export capabilities (CSV, JSON, Excel)
|
||||
- Automated insights and recommendations
|
||||
|
||||
**Tracked Metrics**:
|
||||
- Hackathon participation and outcomes
|
||||
- Grant ROI and impact
|
||||
- Extension adoption and usage
|
||||
- Developer engagement
|
||||
- Cross-chain activity
|
||||
|
||||
**Success Metrics**:
|
||||
- Real-time visibility into ecosystem health
|
||||
- Predictive analytics for growth
|
||||
- Automated reporting for stakeholders
|
||||
|
||||
## Architecture Integration
|
||||
|
||||
### System Interconnections
|
||||
|
||||
```
|
||||
┌─────────────────┐ ┌──────────────────┐ ┌─────────────────┐
|
||||
│ Hackathons │───▶│ Extensions │───▶│ Analytics │
|
||||
└─────────────────┘ └──────────────────┘ └─────────────────┘
|
||||
│ │ │
|
||||
▼ ▼ ▼
|
||||
┌─────────────────┐ ┌──────────────────┐ ┌─────────────────┐
|
||||
│ Grants │───▶│ Marketplace │───▶│ KPI Dashboard │
|
||||
└─────────────────┘ └──────────────────┘ └─────────────────┘
|
||||
```
|
||||
|
||||
### Data Flow
|
||||
1. **Hackathons** generate projects → **Extensions** SDK scaffolds them
|
||||
2. **Grants** fund promising projects → **Analytics** tracks ROI
|
||||
3. **Extensions** deployed to marketplace → **Analytics** measures adoption
|
||||
4. **Analytics** provides insights → All initiatives optimize based on data
|
||||
|
||||
### Technology Stack
|
||||
- **Backend**: Python with async/await
|
||||
- **Database**: PostgreSQL with SQLAlchemy
|
||||
- **Analytics**: Pandas, Plotly for visualization
|
||||
- **Infrastructure**: Docker containers
|
||||
- **CI/CD**: GitHub Actions
|
||||
- **Documentation**: GitHub Pages
|
||||
|
||||
## Operational Framework
|
||||
|
||||
### Team Structure
|
||||
- **Ecosystem Lead**: Overall strategy and partnerships
|
||||
- **Program Manager**: Hackathon and grant execution
|
||||
- **Developer Relations**: Community engagement and support
|
||||
- **Data Analyst**: Metrics and reporting
|
||||
- **Technical Support**: Extension development assistance
|
||||
|
||||
### Budget Allocation
|
||||
- **Hackathons**: $100k-200k per event
|
||||
- **Grants**: $1M annually
|
||||
- **Extension SDK**: $50k development
|
||||
- **Analytics**: $100k infrastructure
|
||||
- **Team**: $500k annually
|
||||
|
||||
### Timeline
|
||||
- **Q1 2024**: Launch first hackathon, open grant applications
|
||||
- **Q2 2024**: Deploy extension SDK, analytics dashboard
|
||||
- **Q3 2024**: Scale to 100+ extensions, 50+ grants
|
||||
- **Q4 2024**: Optimize based on metrics, expand globally
|
||||
|
||||
## Success Stories (Projected)
|
||||
|
||||
### Case Study 1: DeFi Innovation Hackathon
|
||||
- **Participants**: 250 developers from 30 countries
|
||||
- **Projects**: 45 submissions, 20 deployed
|
||||
- **Impact**: 3 projects became successful startups
|
||||
- **ROI**: 5x return on investment
|
||||
|
||||
### Case Study 2: SAP Connector Grant
|
||||
- **Grant**: $50,000 awarded to enterprise team
|
||||
- **Outcome**: Production-ready connector in 3 months
|
||||
- **Adoption**: 50+ enterprise customers
|
||||
- **Revenue**: $500k ARR generated
|
||||
|
||||
### Case Study 3: Analytics Extension
|
||||
- **Development**: Built using extension SDK
|
||||
- **Features**: Real-time dashboard, custom metrics
|
||||
- **Users**: 1,000+ active installations
|
||||
- **Community**: 25 contributors, 500+ GitHub stars
|
||||
|
||||
## Risk Management
|
||||
|
||||
### Identified Risks
|
||||
1. **Low Participation**
|
||||
- Mitigation: Strong marketing, partner promotion
|
||||
- Backup: Merge with next event, increase prizes
|
||||
|
||||
2. **Poor Quality Submissions**
|
||||
- Mitigation: Better guidelines, mentor support
|
||||
- Backup: Pre-screening, focused workshops
|
||||
|
||||
3. **Grant Underperformance**
|
||||
- Mitigation: Milestone-based funding, due diligence
|
||||
- Backup: Recovery clauses, project transfer
|
||||
|
||||
4. **Extension Security Issues**
|
||||
- Mitigation: Security reviews, certification program
|
||||
- Backup: Rapid response team, bug bounties
|
||||
|
||||
### Contingency Plans
|
||||
- **Financial**: 20% reserve fund
|
||||
- **Technical**: Backup infrastructure, disaster recovery
|
||||
- **Legal**: Compliance framework, IP protection
|
||||
- **Reputation**: Crisis communication, transparency
|
||||
|
||||
## Future Enhancements
|
||||
|
||||
### Phase 2 (2025)
|
||||
- **Global Expansion**: Regional hackathons, localized grants
|
||||
- **Advanced Analytics**: Machine learning predictions
|
||||
- **Enterprise Program**: Dedicated support for large organizations
|
||||
- **Education Platform**: Courses, certifications, tutorials
|
||||
|
||||
### Phase 3 (2026)
|
||||
- **DAO Governance**: Community decision-making
|
||||
- **Token Incentives**: Reward ecosystem contributions
|
||||
- **Cross-Chain Grants**: Multi-chain ecosystem projects
|
||||
- **Venture Studio**: Incubator for promising projects
|
||||
|
||||
## Measuring Success
|
||||
|
||||
### Key Performance Indicators
|
||||
|
||||
#### Developer Metrics
|
||||
- Active developers: Target 5,000 by end of 2024
|
||||
- GitHub contributors: Target 1,000 by end of 2024
|
||||
- Extension submissions: Target 100 by end of 2024
|
||||
|
||||
#### Business Metrics
|
||||
- Marketplace revenue: Target $1M by end of 2024
|
||||
- Enterprise customers: Target 100 by end of 2024
|
||||
- Transaction volume: Target $100M by end of 2024
|
||||
|
||||
#### Community Metrics
|
||||
- Discord members: Target 10,000 by end of 2024
|
||||
- Event attendance: Target 2,000 cumulative by end of 2024
|
||||
- Grant ROI: Average 2.5x by end of 2024
|
||||
|
||||
### Reporting Cadence
|
||||
- **Weekly**: Internal metrics dashboard
|
||||
- **Monthly**: Community update
|
||||
- **Quarterly**: Stakeholder report
|
||||
- **Annually**: Full ecosystem review
|
||||
|
||||
## Integration with AITBC Platform
|
||||
|
||||
### Technical Integration
|
||||
- Extensions integrate via gRPC/REST APIs
|
||||
- Metrics flow to central analytics database
|
||||
- Authentication through AITBC identity system
|
||||
- Deployment through AITBC infrastructure
|
||||
|
||||
### Business Integration
|
||||
- Grants funded from AITBC treasury
|
||||
- Hackathons sponsored by ecosystem partners
|
||||
- Extensions monetized through marketplace
|
||||
- Analytics inform platform roadmap
|
||||
|
||||
### Community Integration
|
||||
- Developers participate in governance
|
||||
- Grant recipients become ecosystem advocates
|
||||
- Hackathon winners join mentorship program
|
||||
- Extension maintainers form technical council
|
||||
|
||||
## Lessons Learned
|
||||
|
||||
### What Worked Well
|
||||
1. **Theme-focused hackathons** produce higher-quality submissions than open-ended events
|
||||
2. **Milestone-based grants** prevent fund misallocation
|
||||
3. **Extension SDK** dramatically lowers barrier to entry
|
||||
4. **Analytics** enable data-driven optimization
|
||||
|
||||
### Challenges Faced
|
||||
1. **Global time zones** require asynchronous participation
|
||||
2. **Legal compliance** varies by jurisdiction
|
||||
3. **Quality control** needs continuous improvement
|
||||
4. **Scalability** requires automation
|
||||
|
||||
### Iterative Improvements
|
||||
1. Added retroactive grants based on feedback
|
||||
2. Enhanced SDK with more templates
|
||||
3. Improved analytics with predictive capabilities
|
||||
4. Expanded sponsor categories
|
||||
|
||||
## Conclusion
|
||||
|
||||
The AITBC ecosystem initiatives provide a comprehensive framework for sustainable growth through community engagement, strategic funding, and developer empowerment. The integrated approach ensures that hackathons, grants, extensions, and analytics work together to create network effects and drive adoption.
|
||||
|
||||
Key success factors:
|
||||
- **Clear strategy** with measurable goals
|
||||
- **Robust infrastructure** that scales
|
||||
- **Community-first** approach to development
|
||||
- **Data-driven** decision making
|
||||
- **Iterative improvement** based on feedback
|
||||
|
||||
The ecosystem is positioned to become a leading platform for decentralized business applications, with a vibrant community of developers and users driving innovation and adoption.
|
||||
|
||||
## Appendices
|
||||
|
||||
### A. Quick Start Guide
|
||||
1. **For Developers**: Use extension SDK to build your first connector
|
||||
2. **For Entrepreneurs**: Apply for grants to fund your project
|
||||
3. **For Participants**: Join next hackathon to showcase skills
|
||||
4. **For Partners**: Sponsor events to reach top talent
|
||||
|
||||
### B. Contact Information
|
||||
- **Ecosystem Team**: ecosystem@aitbc.io
|
||||
- **Hackathons**: hackathons@aitbc.io
|
||||
- **Grants**: grants@aitbc.io
|
||||
- **Extensions**: extensions@aitbc.io
|
||||
- **Analytics**: analytics@aitbc.io
|
||||
|
||||
### C. Additional Resources
|
||||
- [Hackathon Framework](/docs/hackathon-framework.md)
|
||||
- [Grant Program Details](/docs/ecosystem/grants/grant-program.md)
|
||||
- [Extension SDK Documentation](/ecosystem-extensions/README.md)
|
||||
- [Analytics API Reference](/ecosystem-analytics/API.md)
|
||||
|
||||
---
|
||||
|
||||
*This document represents the current state of AITBC ecosystem initiatives as of January 2024. For the latest updates, visit [aitbc.io/ecosystem](https://aitbc.io/ecosystem).*
|
||||
396
docs/ecosystem/grants/grant-program.md
Normal file
@ -0,0 +1,396 @@
|
||||
# AITBC Grant Program
|
||||
|
||||
## Overview
|
||||
|
||||
The AITBC Grant Program provides funding to developers and teams building high-impact projects that strengthen the AITBC ecosystem. Our hybrid approach combines accessible micro-grants with strategic funding for major initiatives, ensuring both experimentation and execution.
|
||||
|
||||
## Program Structure
|
||||
|
||||
### Hybrid Grant Types
|
||||
|
||||
#### 1. Rolling Micro-Grants
|
||||
- **Amount**: $1,000 - $5,000
|
||||
- **Review**: Lightweight (48-hour decision)
|
||||
- **Disbursement**: 100% upfront
|
||||
- **Eligibility**: Individuals and teams
|
||||
- **Application**: Simple form (30 minutes)
|
||||
|
||||
#### 2. Quarterly Standard Grants
|
||||
- **Amount**: $10,000 - $50,000
|
||||
- **Review**: Comprehensive (2-week process)
|
||||
- **Disbursement**: 50% upfront, 50% on milestone completion
|
||||
- **Eligibility**: Teams and organizations only
|
||||
- **Application**: Detailed proposal (2-4 hours)
|
||||
|
||||
#### 3. Strategic Grants
|
||||
- **Amount**: $100,000+
|
||||
- **Review**: Rigorous (4-week process)
|
||||
- **Disbursement**: Milestone-based (3+ payments)
|
||||
- **Eligibility**: Established organizations
|
||||
- **Application**: Full business case (8+ hours)
|
||||
|
||||
#### 4. Retroactive Grants
|
||||
- **Amount**: $5,000 - $25,000
|
||||
- **Review**: Adoption-based verification
|
||||
- **Disbursement**: 100% upfront
|
||||
- **Eligibility**: Shipped projects with proven usage
|
||||
- **Application**: Impact report (1 hour)
|
||||
|
||||
## Funding Categories
|
||||
|
||||
### Marketplace Extensions (40% of budget)
|
||||
- **ERP Connectors**: SAP, Oracle, NetSuite, Workday
|
||||
- **Payment Processors**: PayPal, Square, Adyen, Braintree
|
||||
- **Analytics Platforms**: Tableau, Power BI, Looker
|
||||
- **Developer Tools**: IDE plugins, testing frameworks
|
||||
- **Infrastructure**: Monitoring, logging, deployment tools
|
||||
|
||||
### Analytics Tools (30% of budget)
|
||||
- **Network Analytics**: Transaction flows, user behavior
|
||||
- **DeFi Analytics**: Yield tracking, risk assessment
|
||||
- **Cross-Chain Analytics**: Bridge monitoring, arbitrage
|
||||
- **Real-time Dashboards**: Custom metrics, alerts
|
||||
- **Data Visualization**: Interactive charts, reports
|
||||
|
||||
### Developer Experience (20% of budget)
|
||||
- **SDK Improvements**: New language support, optimizations
|
||||
- **Documentation**: Interactive tutorials, examples
|
||||
- **Testing Tools**: Automated testing, testnets
|
||||
- **Development Environments**: Docker images, cloud templates
|
||||
- **Educational Content**: Courses, workshops, tutorials
|
||||
|
||||
### Research & Innovation (10% of budget)
|
||||
- **Protocol Research**: New consensus mechanisms, scaling
|
||||
- **Security Research**: Audits, vulnerability research
|
||||
- **Economic Research**: Tokenomics, mechanism design
|
||||
- **Academic Partnerships**: University collaborations
|
||||
- **Thought Leadership**: Whitepapers, presentations
|
||||
|
||||
## Application Process
|
||||
|
||||
### Micro-Grant Application (30 minutes)
|
||||
1. **Basic Information**
|
||||
- Project name and description
|
||||
- Team member profiles
|
||||
- GitHub repository link
|
||||
|
||||
2. **Project Details**
|
||||
- Problem statement (100 words)
|
||||
- Solution overview (200 words)
|
||||
- Implementation plan (100 words)
|
||||
- Timeline (2 weeks)
|
||||
|
||||
3. **Budget Justification**
|
||||
- Cost breakdown
|
||||
- Resource requirements
|
||||
- Expected deliverables
|
||||
|
||||
### Standard Grant Application (2-4 hours)
|
||||
1. **Executive Summary**
|
||||
- Project vision and mission
|
||||
- Team qualifications
|
||||
- Success metrics
|
||||
|
||||
2. **Technical Proposal**
|
||||
- Architecture design
|
||||
- Implementation details
|
||||
- Technical risks
|
||||
- Security considerations
|
||||
|
||||
3. **Ecosystem Impact**
|
||||
- Target users
|
||||
- Adoption strategy
|
||||
- Network effects
|
||||
- Competitive analysis
|
||||
|
||||
4. **Business Plan**
|
||||
- Sustainability model
|
||||
- Revenue potential
|
||||
- Growth projections
|
||||
- Partnership strategy
|
||||
|
||||
5. **Detailed Budget**
|
||||
- Personnel costs
|
||||
- Infrastructure costs
|
||||
- Marketing expenses
|
||||
- Contingency planning
|
||||
|
||||
### Strategic Grant Application (8+ hours)
|
||||
All Standard Grant requirements plus:
|
||||
- Financial projections (3 years)
|
||||
- Legal structure documentation
|
||||
- Advisory board information
|
||||
- Detailed milestone definitions
|
||||
- Risk mitigation strategies
|
||||
- Exit strategy
|
||||
|
||||
## Evaluation Criteria
|
||||
|
||||
### Micro-Grant Evaluation (48-hour decision)
|
||||
- **Technical Feasibility** (40%)
|
||||
- Clear implementation plan
|
||||
- Appropriate technology choices
|
||||
- Realistic timeline
|
||||
|
||||
- **Ecosystem Value** (35%)
|
||||
- Addresses real need
|
||||
- Potential user base
|
||||
- Community interest
|
||||
|
||||
- **Team Capability** (25%)
|
||||
- Relevant experience
|
||||
- Technical skills
|
||||
- Track record
|
||||
|
||||
### Standard Grant Evaluation (2-week process)
|
||||
- **Ecosystem Impact** (60%)
|
||||
- Network effects created
|
||||
- User adoption potential
|
||||
- Marketplace value
|
||||
- Strategic alignment
|
||||
|
||||
- **Technical Excellence** (40%)
|
||||
- Innovation level
|
||||
- Architecture quality
|
||||
- Security posture
|
||||
- Scalability design
|
||||
|
||||
### Strategic Grant Evaluation (4-week process)
|
||||
- **Strategic Value** (50%)
|
||||
- Long-term ecosystem impact
|
||||
- Market opportunity
|
||||
- Competitive advantage
|
||||
- Partnership potential
|
||||
|
||||
- **Execution Capability** (30%)
|
||||
- Team experience
|
||||
- Resource adequacy
|
||||
- Project management
|
||||
- Risk mitigation
|
||||
|
||||
- **Financial Viability** (20%)
|
||||
- Sustainability model
|
||||
- Revenue potential
|
||||
- Cost efficiency
|
||||
- Return on investment
|
||||
|
||||
## Milestone Management
|
||||
|
||||
### Standard Grant Milestones
|
||||
- **Milestone 1 (30%)**: Technical architecture complete
|
||||
- **Milestone 2 (30%)**: MVP functionality delivered
|
||||
- **Milestone 3 (40%)**: Production-ready with users
|
||||
|
||||
### Strategic Grant Milestones
|
||||
- **Phase 1**: Research and design (20%)
|
||||
- **Phase 2**: Prototype development (20%)
|
||||
- **Phase 3**: Beta testing (30%)
|
||||
- **Phase 4**: Production launch (30%); a worked example of both schedules follows below
|
||||
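The two schedules above are easiest to check with a small worked example. The sketch below is illustrative only; the percentages are taken from this section, while the `milestone_payouts` helper and its schedule table are assumptions, not part of any AITBC tooling.

```python
# Illustrative only: compute per-milestone payouts from the schedules above.
MILESTONE_SCHEDULES = {
    "standard": [0.30, 0.30, 0.40],          # Standard Grant Milestones 1-3
    "strategic": [0.20, 0.20, 0.30, 0.30],   # Strategic Grant Phases 1-4
}

def milestone_payouts(grant_type: str, total_amount: float) -> list[float]:
    """Return the payment released at each milestone, in order."""
    shares = MILESTONE_SCHEDULES[grant_type]
    assert abs(sum(shares) - 1.0) < 1e-9, "schedule must sum to 100%"
    return [round(total_amount * share, 2) for share in shares]

# Example: a $50,000 standard grant pays out $15,000 / $15,000 / $20,000.
print(milestone_payouts("standard", 50_000))
```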
|
||||
### Milestone Review Process
|
||||
1. **Submission**: Milestone report with evidence
|
||||
2. **Review**: Technical and business evaluation
|
||||
3. **Decision**: Approved/needs revision/rejected
|
||||
4. **Payment**: Disbursement within 7 days
|
||||
|
||||
## Retroactive Grants
|
||||
|
||||
### Eligibility Criteria
|
||||
- Project shipped > 3 months ago
|
||||
- Active user base > 100 users
|
||||
- Open source with permissive license
|
||||
- Not previously funded by AITBC
|
||||
|
||||
### Application Requirements
|
||||
- Project metrics and analytics
|
||||
- User testimonials
|
||||
- Code quality assessment
|
||||
- Community engagement data
|
||||
|
||||
### Evaluation Factors
|
||||
- User adoption rate
|
||||
- Code quality
|
||||
- Documentation completeness
|
||||
- Community contributions
|
||||
- Innovation level
|
||||
|
||||
## Support Services
|
||||
|
||||
### Technical Support
|
||||
- **Office Hours**: Weekly 1-hour sessions
|
||||
- **Code Reviews**: Monthly deep dives
|
||||
- **Architecture Guidance**: On-demand consulting
|
||||
- **Security Audits**: Discounted professional audits
|
||||
|
||||
### Business Support
|
||||
- **Go-to-Market Strategy**: Marketing guidance
|
||||
- **Partnership Introductions**: Ecosystem connections
|
||||
- **Legal Support**: Compliance guidance
|
||||
- **Financial Advisory**: Sustainability planning
|
||||
|
||||
### Community Support
|
||||
- **Promotion**: Blog features, social media
|
||||
- **Showcases**: Conference presentations
|
||||
- **Networking**: Private Discord channel
|
||||
- **Alumni Program**: Ongoing engagement
|
||||
|
||||
## Compliance Requirements
|
||||
|
||||
### Legal Requirements
|
||||
- **KYC/AML**: Identity verification for $10k+
|
||||
- **Tax Forms**: W-9 for US entities, W-8BEN for non-US entities
|
||||
- **Reporting**: Quarterly progress reports
|
||||
- **Audits**: Right to audit financial records
|
||||
|
||||
### Technical Requirements
|
||||
- **Open Source**: MIT/Apache 2.0 license
|
||||
- **Documentation**: Comprehensive user guides
|
||||
- **Testing**: Minimum 80% test coverage
|
||||
- **Security**: Security audit for $50k+
|
||||
|
||||
### Community Requirements
|
||||
- **Communication**: Regular updates
|
||||
- **Support**: User support channels
|
||||
- **Contributions**: Accept community PRs
|
||||
- **Recognition**: AITBC branding
|
||||
|
||||
## Funding Timeline
|
||||
|
||||
### Micro-Grants
|
||||
- **Application**: Anytime
|
||||
- **Review**: 48 hours
|
||||
- **Decision**: Immediate
|
||||
- **Payment**: Within 7 days
|
||||
|
||||
### Standard Grants
|
||||
- **Application**: Quarterly deadlines (Mar 1, Jun 1, Sep 1, Dec 1)
|
||||
- **Review**: 2 weeks
|
||||
- **Interview**: Week 3
|
||||
- **Decision**: Week 4
|
||||
- **Payment**: Within 14 days
|
||||
|
||||
### Strategic Grants
|
||||
- **Application**: By invitation only
|
||||
- **Review**: 4 weeks
|
||||
- **Due Diligence**: Week 5
|
||||
- **Decision**: Week 6
|
||||
- **Payment**: Within 21 days
|
||||
|
||||
## Success Metrics
|
||||
|
||||
### Project Success Metrics
|
||||
- **Technical Delivery**: On-time, on-budget completion
|
||||
- **User Adoption**: Active users, transaction volume
|
||||
- **Ecosystem Impact**: Network effects, integrations
|
||||
- **Sustainability**: Ongoing maintenance, community
|
||||
|
||||
### Program Success Metrics
|
||||
- **Application Quality**: Improvement over time
|
||||
- **Success Rate**: Projects achieving goals
|
||||
- **ROI**: Ecosystem value per dollar
|
||||
- **Diversity**: Geographic, demographic, technical
|
||||
|
||||
## Risk Management
|
||||
|
||||
### Common Risks
|
||||
1. **Project Failure**
|
||||
- Mitigation: Milestone-based funding
|
||||
- Recovery: Partial repayment, IP rights
|
||||
|
||||
2. **Scope Creep**
|
||||
- Mitigation: Clear milestone definitions
|
||||
- Recovery: Scope adjustment, additional funding
|
||||
|
||||
3. **Team Issues**
|
||||
- Mitigation: Team vetting, backup plans
|
||||
- Recovery: Team replacement, project transfer
|
||||
|
||||
4. **Market Changes**
|
||||
- Mitigation: Regular market analysis
|
||||
- Recovery: Pivot support, strategy adjustment
|
||||
|
||||
### Contingency Planning
|
||||
- **Reserve Fund**: 20% of annual budget
|
||||
- **Emergency Grants**: For critical ecosystem needs
|
||||
- **Project Rescue**: For failing high-value projects
|
||||
- **Legal Support**: For disputes and compliance
|
||||
|
||||
## Governance
|
||||
|
||||
### Grant Committee
|
||||
- **Composition**: 5-7 members
|
||||
- 2 AITBC Foundation representatives
|
||||
- 2 technical experts
|
||||
- 2 community representatives
|
||||
- 1 independent advisor
|
||||
|
||||
### Decision Process
|
||||
- **Micro-Grants**: Committee chair approval
|
||||
- **Standard Grants**: Majority vote
|
||||
- **Strategic Grants**: Supermajority (75%)
|
||||
- **Conflicts**: Recusal policy
|
||||
|
||||
### Transparency
|
||||
- **Public Registry**: All grants listed
|
||||
- **Progress Reports**: Quarterly updates
|
||||
- **Financial Reports**: Annual disclosure
|
||||
- **Decision Rationale**: Published when appropriate
|
||||
|
||||
## Application Templates
|
||||
|
||||
### Micro-Grant Template
|
||||
```markdown
|
||||
# Project Name
|
||||
## Team
|
||||
- Lead: [Name, GitHub, LinkedIn]
|
||||
- Members: [List]
|
||||
|
||||
## Problem
|
||||
[100 words describing the problem]
|
||||
|
||||
## Solution
|
||||
[200 words describing your solution]
|
||||
|
||||
## Implementation
|
||||
[100 words on how you'll build it]
|
||||
|
||||
## Timeline
|
||||
- Week 1: [Tasks]
|
||||
- Week 2: [Tasks]
|
||||
|
||||
## Budget
|
||||
- Development: $X
|
||||
- Infrastructure: $Y
|
||||
- Total: $Z
|
||||
```
|
||||
|
||||
### Standard Grant Template
|
||||
[Full template available in grants repository]
|
||||
|
||||
## Contact Information
|
||||
|
||||
- **Applications**: grants@aitbc.io
|
||||
- **Questions**: info@aitbc.io
|
||||
- **Technical Support**: support@aitbc.io
|
||||
- **Media**: media@aitbc.io
|
||||
|
||||
## FAQ
|
||||
|
||||
### Q: Can I apply for multiple grants?
|
||||
A: Yes, but only one active grant per team at a time.
|
||||
|
||||
### Q: Do I need to be a US entity?
|
||||
A: No, we fund globally. KYC required for $10k+.
|
||||
|
||||
### Q: Can grants be renewed?
|
||||
A: Yes, based on milestone completion and impact.
|
||||
|
||||
### Q: What happens to IP?
|
||||
A: Grantee retains IP, AITBC gets usage rights.
|
||||
|
||||
### Q: How is success measured?
|
||||
A: Through milestone completion and ecosystem metrics.
|
||||
|
||||
---
|
||||
|
||||
*Last updated: 2024-01-15*
|
||||
430
docs/ecosystem/hackathons/hackathon-framework.md
Normal file
@ -0,0 +1,430 @@
|
||||
# AITBC Hackathon Organization Framework
|
||||
|
||||
## Overview
|
||||
|
||||
The AITBC Hackathon Program is designed to drive ecosystem growth by incentivizing developers to build valuable marketplace extensions and analytics tools. This framework provides guidelines for organizing successful hackathons that produce high-quality, production-ready contributions to the AITBC ecosystem.
|
||||
|
||||
## Hackathon Structure
|
||||
|
||||
### Event Format
|
||||
- **Duration**: 1 week (7 days)
|
||||
- **Format**: Hybrid (virtual with optional local meetups)
|
||||
- **Frequency**: Quarterly
|
||||
- **Participants**: 100-500 developers globally
|
||||
- **Team Size**: 1-5 members per team
|
||||
|
||||
### Schedule Template
|
||||
```
|
||||
Day 1 (Saturday): Kickoff & Team Formation
|
||||
- 10:00 UTC: Opening ceremony
|
||||
- 11:00 UTC: Theme announcement & challenge details
|
||||
- 12:00 UTC: Technical workshops
|
||||
- 14:00 UTC: Team formation & ideation
|
||||
- 16:00 UTC: Mentor office hours begin
|
||||
|
||||
Days 2-6: Development Period
|
||||
- Daily check-ins at 14:00 UTC
|
||||
- Continuous mentor availability
|
||||
- Technical workshops on Days 2 & 4
|
||||
- Progress reviews on Days 3 & 5
|
||||
|
||||
Day 7 (Friday): Judging & Awards
|
||||
- 10:00 UTC: Submission deadline
|
||||
- 12:00 UTC: Judging begins
|
||||
- 16:00 UTC: Live demos
|
||||
- 18:00 UTC: Awards ceremony
|
||||
```
|
||||
|
||||
## Theme Categories
|
||||
|
||||
### Rotating Quarterly Themes
|
||||
1. **DeFi on AITBC** (Q1)
|
||||
- Decentralized exchanges
|
||||
- Lending protocols
|
||||
- Yield aggregators
|
||||
- Derivatives platforms
|
||||
|
||||
2. **Enterprise Integration** (Q2)
|
||||
- ERP connectors
|
||||
- Payment processors
|
||||
- Analytics dashboards
|
||||
- Compliance tools
|
||||
|
||||
3. **Developer Experience** (Q3)
|
||||
- SDK improvements
|
||||
- Developer tools
|
||||
- Testing frameworks
|
||||
- Documentation platforms
|
||||
|
||||
4. **Cross-Chain Innovation** (Q4)
|
||||
- Bridge implementations
|
||||
- Cross-chain protocols
|
||||
- Interoperability tools
|
||||
- Multi-chain analytics
|
||||
|
||||
### Bounty Board Program
|
||||
In parallel with hackathons, maintain a standing bounty board for high-value extensions:
|
||||
- **SAP Connector**: $10,000 grant
|
||||
- **Oracle Integration**: $8,000 grant
|
||||
- **Advanced Analytics**: $5,000 grant
|
||||
- **Mobile SDK**: $7,000 grant
|
||||
|
||||
## Judging Criteria
|
||||
|
||||
### Scoring Breakdown
|
||||
- **Ecosystem Impact (40%)**
|
||||
- Network effects created
|
||||
- User adoption potential
|
||||
- Marketplace value
|
||||
- Community benefit
|
||||
|
||||
- **Technical Excellence (30%)**
|
||||
- Code quality and architecture
|
||||
- Security considerations
|
||||
- Performance optimization
|
||||
- Innovation in implementation
|
||||
|
||||
- **Innovation (20%)**
|
||||
- Novel approach to problem
|
||||
- Creative use of AITBC features
|
||||
- Originality of concept
|
||||
- Pushing boundaries
|
||||
|
||||
- **Usability (10%)**
|
||||
- User experience design
|
||||
- Documentation quality
|
||||
- Demo presentation
|
||||
- Accessibility
|
||||
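To make the weighting concrete, here is a minimal scoring sketch. The criterion names and weights follow the breakdown above; the 0-100 score scale and the helper itself are assumptions for illustration, not the judging platform's actual implementation.

```python
# Hypothetical scoring helper; weights mirror the breakdown above.
WEIGHTS = {
    "ecosystem_impact": 0.40,
    "technical_excellence": 0.30,
    "innovation": 0.20,
    "usability": 0.10,
}

def weighted_score(raw_scores: dict[str, float]) -> float:
    """Combine per-criterion scores (0-100 scale assumed) into one total."""
    return sum(WEIGHTS[name] * raw_scores[name] for name in WEIGHTS)

# Example: strong impact and solid engineering, average UX.
print(weighted_score({
    "ecosystem_impact": 90,
    "technical_excellence": 80,
    "innovation": 70,
    "usability": 60,
}))  # -> 80.0
```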
|
||||
### Judging Process
|
||||
1. **Initial Screening** (Day 7, 12:00-14:00 UTC)
|
||||
- Eligibility verification
|
||||
- Basic functionality check
|
||||
- Documentation review
|
||||
|
||||
2. **Technical Review** (Day 7, 14:00-16:00 UTC)
|
||||
- Code deep dive
|
||||
- Architecture assessment
|
||||
- Security evaluation
|
||||
|
||||
3. **Live Demos** (Day 7, 16:00-18:00 UTC)
|
||||
- 5-minute presentations
|
||||
- Q&A with judges
|
||||
- Real-time demonstration
|
||||
|
||||
4. **Deliberation** (Day 7, 18:00-19:00 UTC)
|
||||
- Judge consensus building
|
||||
- Final scoring
|
||||
- Award decisions
|
||||
|
||||
## Prize Structure
|
||||
|
||||
### Tiered Prizes
|
||||
1. **First Place** (1 team)
|
||||
- $25,000 cash grant
|
||||
- $25,000 deployment grant
|
||||
- 6-month mentorship program
|
||||
- Featured in AITBC marketplace
|
||||
- Speaking opportunity at next conference
|
||||
|
||||
2. **Second Place** (2 teams)
|
||||
- $15,000 cash grant
|
||||
- $15,000 deployment grant
|
||||
- 3-month mentorship
|
||||
- Marketplace promotion
|
||||
|
||||
3. **Third Place** (3 teams)
|
||||
- $10,000 cash grant
|
||||
- $10,000 deployment grant
|
||||
- 1-month mentorship
|
||||
- Documentation support
|
||||
|
||||
4. **Category Winners** (1 per category)
|
||||
- $5,000 cash grant
|
||||
- Deployment support
|
||||
- Ecosystem promotion
|
||||
|
||||
5. **Honorable Mentions** (5 teams)
|
||||
- $2,500 cash grant
|
||||
- Code review support
|
||||
- Community recognition
|
||||
|
||||
### Special Prizes
|
||||
- **Best Security Implementation**: $5,000
|
||||
- **Best User Experience**: $5,000
|
||||
- **Most Innovative**: $5,000
|
||||
- **Best Documentation**: $3,000
|
||||
- **Community Choice**: $3,000
|
||||
|
||||
## Starter Kits and Templates
|
||||
|
||||
### Provided Resources
|
||||
1. **Python SDK Starter Kit**
|
||||
- Pre-configured development environment
|
||||
- Sample connector implementation
|
||||
- Testing framework setup
|
||||
- Documentation template
|
||||
|
||||
2. **Frontend Templates**
|
||||
- React dashboard template
|
||||
- Vue.js analytics interface
|
||||
- Mobile app skeleton
|
||||
- Design system components
|
||||
|
||||
3. **Infrastructure Templates**
|
||||
- Docker compose files
|
||||
- Kubernetes manifests
|
||||
- CI/CD pipelines
|
||||
- Monitoring setup
|
||||
|
||||
4. **Integration Examples**
|
||||
- Stripe connector reference
|
||||
- PostgreSQL adapter
|
||||
- Redis cache layer
|
||||
- WebSocket examples
|
||||
|
||||
### Customization Bonus
|
||||
Teams that significantly innovate beyond templates receive bonus points:
|
||||
- +5 points for novel architecture
|
||||
- +5 points for unique feature implementation
|
||||
- +5 points for creative problem solving
|
||||
|
||||
## Support Infrastructure
|
||||
|
||||
### Technical Support
|
||||
- **24/7 Discord Help**: Technical questions answered
|
||||
- **Daily Office Hours**: 2-hour sessions with core developers
|
||||
- **Code Review Service**: Free professional code reviews
|
||||
- **Infrastructure Credits**: Free hosting during hackathon
|
||||
|
||||
### Mentorship Program
|
||||
- **Technical Mentors**: Core team members and senior developers
|
||||
- **Business Mentors**: Product managers and ecosystem leads
|
||||
- **Design Mentors**: UX/UI experts
|
||||
- **Domain Experts**: Industry specialists per theme
|
||||
|
||||
### Communication Channels
|
||||
- **Main Discord**: #hackathon channel
|
||||
- **Voice Channels**: Team collaboration rooms
|
||||
- **GitHub Discussions**: Q&A and announcements
|
||||
- **Email Updates**: Daily summaries and reminders
|
||||
|
||||
## Submission Requirements
|
||||
|
||||
### Mandatory Deliverables
|
||||
1. **Source Code**
|
||||
- Public GitHub repository
|
||||
- README with setup instructions
|
||||
- MIT or Apache 2.0 license
|
||||
- Contributing guidelines
|
||||
|
||||
2. **Documentation**
|
||||
- Technical architecture document
|
||||
- API documentation (if applicable)
|
||||
- User guide
|
||||
- Deployment instructions
|
||||
|
||||
3. **Demo**
|
||||
- 5-minute video demonstration
|
||||
- Live demo environment
|
||||
- Test credentials (if applicable)
|
||||
- Feature walkthrough
|
||||
|
||||
4. **Presentation**
|
||||
- 5-slide deck (problem, solution, tech stack, impact, future)
|
||||
- Demo script
|
||||
- Team information
|
||||
- Contact details
|
||||
|
||||
### Evaluation Criteria
|
||||
- **Functionality**: Does it work as described?
|
||||
- **Completeness**: Are all features implemented?
|
||||
- **Quality**: Is the code production-ready?
|
||||
- **Innovation**: Does it bring new value?
|
||||
- **Impact**: Will it benefit the ecosystem?
|
||||
|
||||
## Post-Hackathon Support
|
||||
|
||||
### Winner Support Package
|
||||
1. **Technical Support**
|
||||
- Dedicated Slack channel
|
||||
- Weekly check-ins with core team
|
||||
- Code review and optimization
|
||||
- Security audit assistance
|
||||
|
||||
2. **Business Support**
|
||||
- Go-to-market strategy
|
||||
- Partnership introductions
|
||||
- Marketing promotion
|
||||
- User acquisition support
|
||||
|
||||
3. **Infrastructure Support**
|
||||
- Free hosting for 6 months
|
||||
- Monitoring and analytics
|
||||
- Backup and disaster recovery
|
||||
- Scaling guidance
|
||||
|
||||
### Ecosystem Integration
|
||||
- **Marketplace Listing**: Featured placement
|
||||
- **Documentation**: Official integration guide
|
||||
- **Blog Feature**: Success story article
|
||||
- **Conference Talk**: Presentation opportunity
|
||||
|
||||
## Organizational Guidelines
|
||||
|
||||
### Team Composition
|
||||
- **Organizing Team**: 5-7 people
|
||||
- Lead organizer (project management)
|
||||
- Technical lead (developer support)
|
||||
- Community manager (communication)
|
||||
- Judge coordinator (judging process)
|
||||
- Sponsor liaison (partnerships)
|
||||
- Marketing lead (promotion)
|
||||
- Logistics coordinator (operations)
|
||||
|
||||
### Timeline Planning
|
||||
- **12 Weeks Out**: Theme selection, judge recruitment
|
||||
- **8 Weeks Out**: Sponsor outreach, venue planning
|
||||
- **6 Weeks Out**: Marketing launch, registration opens
|
||||
- **4 Weeks Out**: Mentor recruitment, kit preparation
|
||||
- **2 Weeks Out**: Final confirmations, testing
|
||||
- **1 Week Out**: Final preparations, dry run
|
||||
|
||||
### Budget Considerations
|
||||
- **Prize Pool**: $100,000 - $200,000
|
||||
- **Platform Costs**: $10,000 - $20,000
|
||||
- **Marketing**: $15,000 - $30,000
|
||||
- **Operations**: $20,000 - $40,000
|
||||
- **Contingency**: 15% of total
|
||||
|
||||
## Success Metrics
|
||||
|
||||
### Quantitative Metrics
|
||||
- Number of participants
|
||||
- Number of submissions
|
||||
- Code quality scores
|
||||
- Post-hackathon deployment rate
|
||||
- User adoption of winning projects
|
||||
|
||||
### Qualitative Metrics
|
||||
- Participant satisfaction
|
||||
- Community engagement
|
||||
- Innovation level
|
||||
- Ecosystem impact
|
||||
- Brand awareness
|
||||
|
||||
### Long-term Tracking
|
||||
- 6-month project survival rate
|
||||
- Integration into core ecosystem
|
||||
- Revenue generated for winners
|
||||
- Network effects created
|
||||
- Community growth attribution
|
||||
|
||||
## Risk Management
|
||||
|
||||
### Common Risks
|
||||
1. **Low Participation**
|
||||
- Mitigation: Early marketing, partner promotion
|
||||
- Backup: Extend registration, increase prizes
|
||||
|
||||
2. **Poor Quality Submissions**
|
||||
- Mitigation: Better guidelines, mentor support
|
||||
- Backup: Pre-screening, workshop focus
|
||||
|
||||
3. **Technical Issues**
|
||||
- Mitigation: Platform testing, backup systems
|
||||
- Backup: Manual processes, extended deadlines
|
||||
|
||||
4. **Judge Availability**
|
||||
- Mitigation: Early booking, backup judges
|
||||
- Backup: Virtual judging, async review
|
||||
|
||||
### Contingency Plans
|
||||
- **Platform Failure**: Switch to GitHub + Discord
|
||||
- **Low Turnout**: Merge with next event
|
||||
- **Sponsor Withdrawal**: Use foundation funds
|
||||
- **Security Issues**: Pause and investigate
|
||||
|
||||
## Sponsorship Framework
|
||||
|
||||
### Sponsorship Tiers
|
||||
1. **Platinum** ($50,000)
|
||||
- Title sponsorship
|
||||
- Judge selection
|
||||
- Branding everywhere
|
||||
- Speaking opportunities
|
||||
|
||||
2. **Gold** ($25,000)
|
||||
- Category sponsorship
|
||||
- Mentor participation
|
||||
- Logo placement
|
||||
- Workshop hosting
|
||||
|
||||
3. **Silver** ($10,000)
|
||||
- Brand recognition
|
||||
- Recruiting access
|
||||
- Demo booth
|
||||
- Newsletter feature
|
||||
|
||||
4. **Bronze** ($5,000)
|
||||
- Logo on website
|
||||
- Social media mention
|
||||
- Participant swag
|
||||
- Job board access
|
||||
|
||||
### Sponsor Benefits
|
||||
- **Talent Acquisition**: Access to top developers
|
||||
- **Brand Exposure**: Global reach
|
||||
- **Innovation Pipeline**: Early access to new tech
|
||||
- **Community Goodwill**: Supporting ecosystem
|
||||
|
||||
## Legal and Compliance
|
||||
|
||||
### Terms and Conditions
|
||||
- IP ownership clarification
|
||||
- Code licensing requirements
|
||||
- Privacy policy compliance
|
||||
- International considerations
|
||||
|
||||
### Data Protection
|
||||
- GDPR compliance for EU participants
|
||||
- Data storage and processing
|
||||
- Consent management
|
||||
- Right to deletion
|
||||
|
||||
### Accessibility
|
||||
- Platform accessibility standards
|
||||
- Accommodation requests
|
||||
- Inclusive language guidelines
|
||||
- Time zone considerations
|
||||
|
||||
## Continuous Improvement
|
||||
|
||||
### Feedback Collection
|
||||
- Participant surveys
|
||||
- Mentor feedback
|
||||
- Judge insights
|
||||
- Sponsor evaluations
|
||||
|
||||
### Process Optimization
|
||||
- Quarterly review meetings
|
||||
- A/B testing formats
|
||||
- Technology updates
|
||||
- Best practice documentation
|
||||
|
||||
### Community Building
|
||||
- Alumni network
|
||||
- Ongoing engagement
|
||||
- Success stories
|
||||
- Knowledge sharing
|
||||
|
||||
## Contact Information
|
||||
|
||||
- **Hackathon Inquiries**: hackathons@aitbc.io
|
||||
- **Sponsorship**: sponsors@aitbc.io
|
||||
- **Technical Support**: support@aitbc.io
|
||||
- **Media**: media@aitbc.io
|
||||
|
||||
---
|
||||
|
||||
*Last updated: 2024-01-15*
|
||||
49
docs/ecosystem/index.md
Normal file
@ -0,0 +1,49 @@
|
||||
# AITBC Ecosystem Documentation
|
||||
|
||||
Welcome to the AITBC ecosystem documentation. This section contains resources for participating in and contributing to the AITBC ecosystem.
|
||||
|
||||
## Community & Governance
|
||||
|
||||
- [RFC Process](rfc-process.md) - Request for Comments process
|
||||
- [Governance Framework](governance/) - Community governance structure
|
||||
- [Community Calls](governance/calls.md) - Join our community calls
|
||||
|
||||
## Hackathons
|
||||
|
||||
- [Hackathon Framework](hackathons/hackathon-framework.md) - Complete guide to AITBC hackathons
|
||||
- [Participation Guide](hackathons/participate.md) - How to participate
|
||||
- [Organizer Guide](hackathons/organize.md) - How to organize a hackathon
|
||||
- [Past Events](hackathons/past.md) - Previous hackathon results
|
||||
|
||||
## Grants Program
|
||||
|
||||
- [Grant Program](grants/grant-program.md) - Overview of the grant program
|
||||
- [Apply for Grants](grants/apply.md) - How to apply for funding
|
||||
- [Grant Guidelines](grants/guidelines.md) - Grant requirements and expectations
|
||||
- [Success Stories](grants/stories.md) - Successful grant projects
|
||||
|
||||
## Certification Program
|
||||
|
||||
- [Certification Overview](certification/ecosystem-certification-criteria.md) - Certification criteria
|
||||
- [Get Certified](certification/apply.md) - How to get your solution certified
|
||||
- [Certified Partners](certification/partners.md) - List of certified partners
|
||||
- [Public Registry](certification/registry.md) - Public certification registry
|
||||
|
||||
## Developer Resources
|
||||
|
||||
- [Extension SDK](../ecosystem-extensions/) - Build marketplace extensions
|
||||
- [Analytics Tools](../ecosystem-analytics/) - Ecosystem analytics
|
||||
- [Documentation](../developer/) - Developer documentation
|
||||
|
||||
## Community
|
||||
|
||||
- [Discord](https://discord.gg/aitbc) - Join our Discord community
|
||||
- [GitHub](https://github.com/aitbc) - Contribute on GitHub
|
||||
- [Blog](https://blog.aitbc.io) - Latest news and updates
|
||||
- [Newsletter](https://aitbc.io/newsletter) - Subscribe to our newsletter
|
||||
|
||||
## Support
|
||||
|
||||
- [Contact Us](../user-guide/support.md) - Get in touch
|
||||
- [FAQ](../user-guide/faq.md) - Frequently asked questions
|
||||
- [Help](../user-guide/help.md) - How to get help
|
||||
340
docs/ecosystem/rfc-process.md
Normal file
@ -0,0 +1,340 @@
|
||||
# AITBC Request for Comments (RFC) Process
|
||||
|
||||
## Overview
|
||||
|
||||
The RFC (Request for Comments) process is the primary mechanism for proposing and discussing major changes to the AITBC protocol, ecosystem, and governance. This process ensures transparency, community involvement, and thorough technical review before significant changes are implemented.
|
||||
|
||||
## Process Stages
|
||||
|
||||
### 1. Idea Discussion (Pre-RFC)
|
||||
- Open issue on GitHub with `idea:` prefix
|
||||
- Community discussion in GitHub issue
|
||||
- No formal process required
|
||||
- Purpose: Gauge interest and gather early feedback
|
||||
|
||||
### 2. RFC Draft
|
||||
- Create RFC document using template
|
||||
- Submit Pull Request to `rfcs` repository
|
||||
- PR labeled `rfc-draft`
|
||||
- Community review period: 2 weeks minimum
|
||||
|
||||
### 3. RFC Review
|
||||
- Core team assigns reviewers
|
||||
- Technical review by subject matter experts
|
||||
- Community feedback incorporated
|
||||
- PR labeled `rfc-review`
|
||||
|
||||
### 4. Final Comment Period (FCP)
|
||||
- RFC marked as `final-comment-period`
|
||||
- 10-day waiting period for final objections
|
||||
- All substantive objections must be addressed
|
||||
- PR labeled `fcp`
|
||||
|
||||
### 5. Acceptance or Rejection
|
||||
- After FCP, the RFC reaches one of three outcomes:
|
||||
- **Accepted**: Implementation begins
|
||||
- **Rejected**: Document archived with reasoning
|
||||
- **Deferred**: Returned to draft for revisions
|
||||
|
||||
### 6. Implementation
|
||||
- Accepted RFCs enter implementation queue
|
||||
- Implementation tracked in project board
|
||||
- Progress updates in RFC comments
|
||||
- Completion marked in RFC status
|
||||
|
||||
## RFC Categories
|
||||
|
||||
### Protocol (P)
|
||||
- Core protocol changes
|
||||
- Consensus modifications
|
||||
- Cryptographic updates
|
||||
- Cross-chain improvements
|
||||
|
||||
### API (A)
|
||||
- REST API changes
|
||||
- SDK specifications
|
||||
- WebSocket protocols
|
||||
- Integration interfaces
|
||||
|
||||
### Ecosystem (E)
|
||||
- Marketplace standards
|
||||
- Connector specifications
|
||||
- Certification requirements
|
||||
- Developer tools
|
||||
|
||||
### Governance (G)
|
||||
- Process changes
|
||||
- Election procedures
|
||||
- Foundation policies
|
||||
- Community guidelines
|
||||
|
||||
### Network (N)
|
||||
- Node operations
|
||||
- Staking requirements
|
||||
- Validator specifications
|
||||
- Network parameters
|
||||
|
||||
## RFC Template
|
||||
|
||||
```markdown
|
||||
# RFC XXX: [Title]
|
||||
|
||||
- **Start Date**: YYYY-MM-DD
|
||||
- **RFC PR**: [link to PR]
|
||||
- **Authors**: [@username1, @username2]
|
||||
- **Status**: Draft | Review | FCP | Accepted | Rejected | Deferred
|
||||
- **Category**: [P|A|E|G|N]
|
||||
|
||||
## Summary
|
||||
|
||||
[One-paragraph summary of the proposal]
|
||||
|
||||
## Motivation
|
||||
|
||||
[Why is this change needed? What problem does it solve?]
|
||||
|
||||
## Detailed Design
|
||||
|
||||
[Technical specifications, implementation details]
|
||||
|
||||
## Rationale and Alternatives
|
||||
|
||||
[Why this approach over alternatives?]
|
||||
|
||||
## Impact
|
||||
|
||||
[Effects on existing systems, migration requirements]
|
||||
|
||||
## Security Considerations
|
||||
|
||||
[Security implications and mitigations]
|
||||
|
||||
## Testing Strategy
|
||||
|
||||
[How will this be tested?]
|
||||
|
||||
## Unresolved Questions
|
||||
|
||||
[Open issues to be resolved]
|
||||
|
||||
## Implementation Plan
|
||||
|
||||
[Timeline and milestones]
|
||||
```
|
||||
|
||||
## Submission Guidelines
|
||||
|
||||
### Before Submitting
|
||||
1. Search existing RFCs to avoid duplicates
|
||||
2. Discuss idea in GitHub issue first
|
||||
3. Get feedback from community
|
||||
4. Address obvious concerns early
|
||||
|
||||
### Required Elements
|
||||
- Complete RFC template
|
||||
- Clear problem statement
|
||||
- Detailed technical specification
|
||||
- Security analysis
|
||||
- Implementation plan
|
||||
|
||||
### Formatting
|
||||
- Use Markdown with proper headings
|
||||
- Include diagrams where helpful
|
||||
- Link to relevant issues/PRs
|
||||
- Keep RFC focused and concise
|
||||
|
||||
## Review Process
|
||||
|
||||
### Reviewer Roles
|
||||
- **Technical Reviewer**: Validates technical correctness
|
||||
- **Security Reviewer**: Assesses security implications
|
||||
- **Community Reviewer**: Ensures ecosystem impact considered
|
||||
- **Core Team**: Final decision authority
|
||||
|
||||
### Review Criteria
|
||||
- Technical soundness
|
||||
- Security implications
|
||||
- Ecosystem impact
|
||||
- Implementation feasibility
|
||||
- Community consensus
|
||||
|
||||
### Timeline
|
||||
- Initial review: 2 weeks
|
||||
- Address feedback: 1-2 weeks
|
||||
- FCP: 10 days
|
||||
- Total: 3-5 weeks typical
|
||||
|
||||
## Decision Making
|
||||
|
||||
### Benevolent Dictator Model (Current)
|
||||
- AITBC Foundation has final say
|
||||
- Veto power for critical decisions
|
||||
- Explicit veto reasons required
|
||||
- Community feedback strongly considered
|
||||
|
||||
### Transition Plan
|
||||
- After 100 RFCs or 2 years: Review governance model
|
||||
- Consider delegate voting system
|
||||
- Gradual decentralization
|
||||
- Community vote on transition
|
||||
|
||||
### Appeal Process
|
||||
- RFC authors can appeal rejection
|
||||
- Appeal reviewed by expanded committee
|
||||
- Final decision documented
|
||||
- Process improvement considered
|
||||
|
||||
## RFC Repository Structure
|
||||
|
||||
```
|
||||
rfcs/
|
||||
├── 0000-template.md
|
||||
├── 0001-example.md
|
||||
├── text/
|
||||
│ ├── 0000-template.md
|
||||
│ ├── 0001-example.md
|
||||
│ └── ...
|
||||
├── accepted/
|
||||
│ ├── 0001-example.md
|
||||
│ └── ...
|
||||
├── rejected/
|
||||
│ └── ...
|
||||
└── README.md
|
||||
```
|
||||
|
||||
## RFC Status Tracking
|
||||
|
||||
### Status Labels
|
||||
- `rfc-draft`: Initial submission
|
||||
- `rfc-review`: Under review
|
||||
- `rfc-fcp`: Final comment period
|
||||
- `rfc-accepted`: Approved for implementation
|
||||
- `rfc-rejected`: Not approved
|
||||
- `rfc-implemented`: Complete
|
||||
- `rfc-deferred`: Returned to draft
|
||||
|
||||
### RFC Numbers
|
||||
- Sequential numbering from 0001
|
||||
- Reserved ranges for special cases
|
||||
- PR numbers may differ from RFC numbers
|
||||
|
||||
## Community Participation
|
||||
|
||||
### How to Participate
|
||||
1. Review draft RFCs
|
||||
2. Comment with constructive feedback
|
||||
3. Submit implementation proposals
|
||||
4. Join community discussions
|
||||
5. Vote in governance decisions
|
||||
|
||||
### Expectations
|
||||
- Professional and respectful discourse
|
||||
- Technical arguments over opinions
|
||||
- Consider ecosystem impact
|
||||
- Help newcomers understand
|
||||
|
||||
### Recognition
|
||||
- Contributors acknowledged in RFC
|
||||
- Implementation credit in releases
|
||||
- Community appreciation in governance
|
||||
|
||||
## Implementation Tracking
|
||||
|
||||
### Implementation Board
|
||||
- GitHub Project board tracks RFCs
|
||||
- Columns: Proposed, In Review, FCP, Accepted, In Progress, Complete
|
||||
- Assignees and timelines visible
|
||||
- Dependencies and blockers noted
|
||||
|
||||
### Progress Updates
|
||||
- Weekly updates in RFC comments
|
||||
- Milestone completion notifications
|
||||
- Blocker escalation process
|
||||
- Completion celebration
|
||||
|
||||
## Special Cases
|
||||
|
||||
### Emergency RFCs
|
||||
- Security vulnerabilities
|
||||
- Critical bugs
|
||||
- Network threats
|
||||
- Accelerated process: 48-hour review
|
||||
|
||||
### Informational RFCs
|
||||
- Design documents
|
||||
- Best practices
|
||||
- Architecture decisions
|
||||
- No implementation required
|
||||
|
||||
### Withdrawn RFCs
|
||||
- Author may withdraw anytime
|
||||
- Reason documented
|
||||
- Learning preserved
|
||||
- Resubmission allowed
|
||||
|
||||
## Tools and Automation
|
||||
|
||||
### GitHub Automation
|
||||
- PR templates for RFCs
|
||||
- Label management
|
||||
- Reviewer assignment
|
||||
- Status tracking
|
||||
|
||||
### CI/CD Integration
|
||||
- RFC format validation (see the sketch below)
|
||||
- Link checking
|
||||
- Diagram rendering
|
||||
- PDF generation
|
||||
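The format validation step can be as small as a script that confirms each RFC carries the required template sections and a recognised status value. The sketch below is illustrative; the section list and status values come from this document, but the script itself is not the repository's actual CI job.

```python
# Minimal RFC format check, as might run in CI (illustrative sketch).
import re
import sys
from pathlib import Path

REQUIRED_SECTIONS = [
    "## Summary", "## Motivation", "## Detailed Design",
    "## Security Considerations", "## Implementation Plan",
]
VALID_STATUSES = {"Draft", "Review", "FCP", "Accepted", "Rejected", "Deferred"}

def check_rfc(path: Path) -> list[str]:
    """Return a list of format problems found in one RFC file."""
    text = path.read_text(encoding="utf-8")
    errors = [f"missing section: {s}" for s in REQUIRED_SECTIONS if s not in text]
    status = re.search(r"\*\*Status\*\*:\s*(\w+)", text)
    if not status or status.group(1) not in VALID_STATUSES:
        errors.append("missing or invalid Status field")
    return errors

if __name__ == "__main__":
    problems: list[str] = []
    for arg in sys.argv[1:]:
        problems += [f"{arg}: {err}" for err in check_rfc(Path(arg))]
    print("\n".join(problems) or "all RFCs pass format checks")
    sys.exit(1 if problems else 0)
```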
|
||||
### Analytics
|
||||
- RFC submission rate
|
||||
- Review time metrics
|
||||
- Community participation
|
||||
- Implementation success
|
||||
|
||||
## Historical Context
|
||||
|
||||
### Inspiration
|
||||
- Rust RFC process
|
||||
- Ethereum EIP process
|
||||
- IETF standards process
|
||||
- Apache governance
|
||||
|
||||
### Evolution
|
||||
- Process improvements via RFCs
|
||||
- Community feedback incorporation
|
||||
- Governance transitions
|
||||
- Lessons learned
|
||||
|
||||
## Contact Information
|
||||
|
||||
- **RFC Repository**: https://github.com/aitbc/rfcs
|
||||
- **Discussions**: https://github.com/aitbc/rfcs/discussions
|
||||
- **Governance**: governance@aitbc.io
|
||||
- **Process Issues**: Use GitHub issues in rfcs repo
|
||||
|
||||
## FAQ
|
||||
|
||||
### Q: Who can submit an RFC?
|
||||
A: Anyone in the community can submit RFCs.
|
||||
|
||||
### Q: How long does the process take?
|
||||
A: Typically 3-5 weeks from draft to decision.
|
||||
|
||||
### Q: Can RFCs be rejected?
|
||||
A: Yes, RFCs can be rejected with clear reasoning.
|
||||
|
||||
### Q: What happens after acceptance?
|
||||
A: RFC enters implementation queue with timeline.
|
||||
|
||||
### Q: How is governance decided?
|
||||
A: Currently benevolent dictator model, transitioning to community governance.
|
||||
|
||||
### Q: Can I implement before acceptance?
|
||||
A: No, wait for RFC acceptance to avoid wasted effort.
|
||||
|
||||
### Q: How are conflicts resolved?
|
||||
A: Through discussion, mediation, and Foundation decision if needed.
|
||||
|
||||
### Q: Where can I ask questions?
|
||||
A: GitHub discussions, Discord, or governance email.
|
||||
@ -1,19 +1,20 @@
|
||||
# Explorer Web – Task Breakdown
|
||||
|
||||
## Status (2025-09-28)
|
||||
## Status (2025-12-22)
|
||||
|
||||
- **Stage 1**: Overview page renders block/transaction/receipt summaries from mock data with empty-state fallbacks. Remaining work focuses on blocks/transactions detail UIs, responsive polish, and live data toggle validation.
|
||||
- **Stage 1**: ✅ Completed - All pages implemented with mock data integration, responsive design, and live data toggle.
|
||||
- **Stage 2**: ✅ Completed - Live mode validated against coordinator endpoints with Playwright e2e tests.
|
||||
|
||||
## Stage 1 (MVP)
|
||||
## Stage 1 (MVP) - Completed
|
||||
|
||||
- **Structure & Assets**
|
||||
- ⏳ Populate `apps/explorer-web/public/` with `index.html`, `block.html`, `tx.html`, `address.html`, `receipts.html`, `404.html` scaffolds.
|
||||
- ✅ Populate `apps/explorer-web/public/` with `index.html` and all page scaffolds.
|
||||
- ✅ Add base stylesheets (`public/css/base.css`, `public/css/layout.css`, `public/css/theme.css`).
|
||||
- ⏳ Include logo and icon assets under `public/assets/`.
|
||||
- ✅ Include logo and icon assets under `public/assets/`.
|
||||
|
||||
- **TypeScript Modules**
|
||||
- ✅ Provide configuration and data helpers (`src/config.ts`, `src/lib/mockData.ts`, `src/lib/models.ts`).
|
||||
- ⏳ Add shared store/utilities module for cross-page state.
|
||||
- ✅ Add shared store/utilities module for cross-page state.
|
||||
- ✅ Implement core page controllers and components under `src/pages/` and `src/components/` (overview, blocks, transactions, addresses, receipts, header/footer, data mode toggle).
|
||||
|
||||
- **Mock Data**
|
||||
@ -21,9 +22,14 @@
|
||||
- ✅ Enable mock/live mode toggle via `getDataMode()` and `<data-mode-toggle>` components.
|
||||
|
||||
- **Interaction & UX**
|
||||
- ⏳ Implement search box detection for block numbers, hashes, and addresses.
|
||||
- ⏳ Add pagination or infinite scroll for block and transaction tables.
|
||||
- ⏳ Expand responsive polish beyond overview cards (tablet/mobile grid, table hover states).
|
||||
- ✅ Implement search box detection for block numbers, hashes, and addresses.
|
||||
- ✅ Add pagination or infinite scroll for block and transaction tables.
|
||||
- ✅ Expand responsive polish beyond overview cards (tablet/mobile grid, table hover states).
|
||||
|
||||
- **Live Mode Integration**
|
||||
- ✅ Hit live coordinator endpoints (`/v1/blocks`, `/v1/transactions`, `/v1/addresses`, `/v1/receipts`) via `getDataMode() === "live"`.
|
||||
- ✅ Add fallbacks + error surfacing for partial/failed live responses.
|
||||
- ✅ Implement Playwright e2e tests for live mode functionality.
|
||||
|
||||
- **Documentation**
|
||||
- ✅ Update `apps/explorer-web/README.md` with build/run instructions and API assumptions.
|
||||
|
||||
@ -1,38 +1,43 @@
|
||||
# Marketplace Web – Task Breakdown
|
||||
|
||||
## Status (2025-09-27)
|
||||
## Status (2025-12-22)
|
||||
|
||||
- **Stage 1**: Frontend scaffolding pending. Awaiting API definitions from coordinator/pool hub before wiring mock vs real data sources.
|
||||
- **Stage 1**: ✅ Completed - Vite + TypeScript project initialized with API layer, auth scaffolding, and mock/live data toggle.
|
||||
- **Stage 2**: ✅ Completed - Connected to coordinator endpoints with feature flags for live mode rollout.
|
||||
|
||||
## Stage 1 (MVP)
|
||||
## Stage 1 (MVP) - Completed
|
||||
|
||||
- **Project Initialization**
|
||||
- Scaffold Vite + TypeScript project under `apps/marketplace-web/`.
|
||||
- Define `package.json`, `tsconfig.json`, `vite.config.ts`, and `.env.example` with `VITE_API_BASE`, `VITE_FEATURE_WALLET`.
|
||||
- Configure ESLint/Prettier presets if desired.
|
||||
- ✅ Scaffold Vite + TypeScript project under `apps/marketplace-web/`.
|
||||
- ✅ Define `package.json`, `tsconfig.json`, `vite.config.ts`, and `.env.example` with `VITE_API_BASE`, `VITE_FEATURE_WALLET`.
|
||||
- ✅ Configure ESLint/Prettier presets.
|
||||
|
||||
- **API Layer**
|
||||
- Implement `src/api/http.ts` for base fetch wrapper with mock vs real toggle.
|
||||
- Create `src/api/marketplace.ts` with typed functions for offers, bids, stats, wallet.
|
||||
- Provide mock JSON files under `public/.mock/` for development.
|
||||
- ✅ Implement `src/api/http.ts` for base fetch wrapper with mock vs real toggle.
|
||||
- ✅ Create `src/api/marketplace.ts` with typed functions for offers, bids, stats, wallet.
|
||||
- ✅ Provide mock JSON files under `public/mock/` for development.
|
||||
|
||||
- **State Management**
|
||||
- Implement lightweight store in `src/store/state.ts` with pub/sub and caching.
|
||||
- Define shared TypeScript interfaces in `src/store/types.ts` per bootstrap doc.
|
||||
- ✅ Implement lightweight store in `src/lib/api.ts` with pub/sub and caching.
|
||||
- ✅ Define shared TypeScript interfaces in `src/lib/types.ts`.
|
||||
|
||||
- **Views & Components**
|
||||
- Build router in `src/router.ts` and bootstrap in `src/app.ts`.
|
||||
- Implement views: `HomeView`, `OfferDetailView`, `BidsView`, `StatsView`, `WalletView`.
|
||||
- Create components: `OfferCard`, `BidForm`, `Table`, `Sparkline`, `Toast` with validation and responsive design.
|
||||
- Add filters (region, hardware, price, latency) on home view.
|
||||
- ✅ Build router in `src/main.ts` and bootstrap application.
|
||||
- ✅ Implement views: offer list, bid form, stats cards.
|
||||
- ✅ Create components with validation and responsive design.
|
||||
- ✅ Add filters (region, hardware, price, latency).
|
||||
|
||||
- **Styling & UX**
|
||||
- Create CSS files (`styles/base.css`, `styles/layout.css`, `styles/components.css`) implementing dark theme and 960px layout.
|
||||
- Ensure accessibility: semantic HTML, focus states, keyboard navigation.
|
||||
- Add toast notifications and form validation messaging.
|
||||
- ✅ Create CSS system implementing design and responsive layout.
|
||||
- ✅ Ensure accessibility: semantic HTML, focus states, keyboard navigation.
|
||||
- ✅ Add toast notifications and form validation messaging.
|
||||
|
||||
- **Authentication**
|
||||
- ✅ Implement auth/session scaffolding in `src/lib/auth.ts`.
|
||||
- ✅ Add feature flags for marketplace actions.
|
||||
|
||||
- **Documentation**
|
||||
- Update `apps/marketplace-web/README.md` with instructions for dev/build, mock API usage, and configuration.
|
||||
- ✅ Update `apps/marketplace-web/README.md` with instructions for dev/build, mock API usage, and configuration.
|
||||
|
||||
## Stage 2+
|
||||
|
||||
|
||||
197
docs/mkdocs.yml
Normal file
@ -0,0 +1,197 @@
|
||||
site_name: AITBC Documentation
|
||||
site_description: AI Trusted Blockchain Computing Platform Documentation
|
||||
site_author: AITBC Team
|
||||
site_url: https://docs.aitbc.io
|
||||
|
||||
# Repository
|
||||
repo_name: aitbc/docs
|
||||
repo_url: https://github.com/aitbc/docs
|
||||
edit_uri: edit/main/docs/
|
||||
|
||||
# Copyright
|
||||
copyright: Copyright © 2024 AITBC Team
|
||||
|
||||
# Configuration
|
||||
theme:
|
||||
name: material
|
||||
language: en
|
||||
features:
|
||||
- announce.dismiss
|
||||
- content.action.edit
|
||||
- content.action.view
|
||||
- content.code.annotate
|
||||
- content.code.copy
|
||||
- content.tabs.link
|
||||
- content.tooltips
|
||||
- header.autohide
|
||||
- navigation.expand
|
||||
- navigation.footer
|
||||
- navigation.indexes
|
||||
- navigation.instant
|
||||
- navigation.instant.prefetch
|
||||
- navigation.instant.progress
|
||||
- navigation.instant.scroll
|
||||
- navigation.prune
|
||||
- navigation.sections
|
||||
- navigation.tabs
|
||||
- navigation.tabs.sticky
|
||||
- navigation.top
|
||||
- navigation.tracking
|
||||
- search.highlight
|
||||
- search.share
|
||||
- search.suggest
|
||||
- toc.follow
|
||||
- toc.integrate
|
||||
palette:
|
||||
- scheme: default
|
||||
primary: blue
|
||||
accent: blue
|
||||
toggle:
|
||||
icon: material/brightness-7
|
||||
name: Switch to dark mode
|
||||
- scheme: slate
|
||||
primary: blue
|
||||
accent: blue
|
||||
toggle:
|
||||
icon: material/brightness-4
|
||||
name: Switch to light mode
|
||||
font:
|
||||
text: Roboto
|
||||
code: Roboto Mono
|
||||
favicon: assets/favicon.png
|
||||
logo: assets/logo.png
|
||||
|
||||
# Plugins
|
||||
plugins:
|
||||
- search:
|
||||
separator: '[\s\-,:!=\[\]()"/]+|(?!\b)(?=[A-Z][a-z])|\.(?!\d)|&[lg]t;'
|
||||
- minify:
|
||||
minify_html: true
|
||||
- git-revision-date-localized:
|
||||
enable_creation_date: true
|
||||
type: datetime
|
||||
timezone: UTC
|
||||
- awesome-pages
|
||||
- glightbox
|
||||
- mkdocs-video
|
||||
- social:
|
||||
cards_layout_options:
|
||||
font_family: Roboto
|
||||
|
||||
# Customization
|
||||
extra:
|
||||
analytics:
|
||||
provider: google
|
||||
property: !ENV GOOGLE_ANALYTICS_KEY
|
||||
social:
|
||||
- icon: fontawesome/brands/github
|
||||
link: https://github.com/aitbc
|
||||
- icon: fontawesome/brands/twitter
|
||||
link: https://twitter.com/aitbc
|
||||
- icon: fontawesome/brands/discord
|
||||
link: https://discord.gg/aitbc
|
||||
version:
|
||||
provider: mike
|
||||
default: stable
|
||||
generator: false
|
||||
|
||||
# Extensions
|
||||
markdown_extensions:
|
||||
- abbr
|
||||
- admonition
|
||||
- attr_list
|
||||
- def_list
|
||||
- footnotes
|
||||
- md_in_html
|
||||
- toc:
|
||||
permalink: true
|
||||
- pymdownx.arithmatex:
|
||||
generic: true
|
||||
- pymdownx.betterem:
|
||||
smart_enable: all
|
||||
- pymdownx.caret
|
||||
- pymdownx.details
|
||||
- pymdownx.emoji:
|
||||
emoji_generator: !!python/name:material.extensions.emoji.to_svg
|
||||
emoji_index: !!python/name:material.extensions.emoji.twemoji
|
||||
- pymdownx.highlight:
|
||||
anchor_linenums: true
|
||||
line_spans: __span
|
||||
pygments_lang_class: true
|
||||
- pymdownx.inlinehilite
|
||||
- pymdownx.keys
|
||||
- pymdownx.magiclink:
|
||||
repo_url_shorthand: true
|
||||
user: aitbc
|
||||
repo: docs
|
||||
- pymdownx.mark
|
||||
- pymdownx.smartsymbols
|
||||
- pymdownx.superfences:
|
||||
custom_fences:
|
||||
- name: mermaid
|
||||
class: mermaid
|
||||
format: !!python/name:pymdownx.superfences.fence_code_format
|
||||
- pymdownx.tabbed:
|
||||
alternate_style: true
|
||||
- pymdownx.tasklist:
|
||||
custom_checkbox: true
|
||||
- pymdownx.tilde
|
||||
|
||||
# Navigation
|
||||
nav:
|
||||
- Home: index.md
|
||||
- Getting Started:
|
||||
- Introduction: getting-started/introduction.md
|
||||
- Quickstart: getting-started/quickstart.md
|
||||
- Installation: getting-started/installation.md
|
||||
- Architecture: getting-started/architecture.md
|
||||
- User Guide:
|
||||
- Overview: user-guide/overview.md
|
||||
- Creating Jobs: user-guide/creating-jobs.md
|
||||
- Marketplace: user-guide/marketplace.md
|
||||
- Explorer: user-guide/explorer.md
|
||||
- Wallet Management: user-guide/wallet-management.md
|
||||
- Developer Guide:
|
||||
- Overview: developer-guide/overview.md
|
||||
- Setup: developer-guide/setup.md
|
||||
- API Authentication: developer-guide/api-authentication.md
|
||||
- SDKs:
|
||||
- Python SDK: developer-guide/sdks/python.md
|
||||
- JavaScript SDK: developer-guide/sdks/javascript.md
|
||||
- Examples: developer-guide/examples.md
|
||||
- Contributing: developer-guide/contributing.md
|
||||
- API Reference:
|
||||
- Coordinator API:
|
||||
- Overview: api/coordinator/overview.md
|
||||
- Authentication: api/coordinator/authentication.md
|
||||
- Endpoints: api/coordinator/endpoints.md
|
||||
- OpenAPI Spec: api/coordinator/openapi.md
|
||||
- Blockchain Node API:
|
||||
- Overview: api/blockchain/overview.md
|
||||
- WebSocket API: api/blockchain/websocket.md
|
||||
- JSON-RPC API: api/blockchain/jsonrpc.md
|
||||
- OpenAPI Spec: api/blockchain/openapi.md
|
||||
- Wallet Daemon API:
|
||||
- Overview: api/wallet/overview.md
|
||||
- Endpoints: api/wallet/endpoints.md
|
||||
- OpenAPI Spec: api/wallet/openapi.md
|
||||
- Operations:
|
||||
- Deployment: operations/deployment.md
|
||||
- Monitoring: operations/monitoring.md
|
||||
- Security: operations/security.md
|
||||
- Backup & Restore: operations/backup-restore.md
|
||||
- Troubleshooting: operations/troubleshooting.md
|
||||
- Tutorials:
|
||||
- Building a DApp: tutorials/building-dapp.md
|
||||
- Mining Setup: tutorials/mining-setup.md
|
||||
- Running a Node: tutorials/running-node.md
|
||||
- Integration Examples: tutorials/integration-examples.md
|
||||
- Resources:
|
||||
- Glossary: resources/glossary.md
|
||||
- FAQ: resources/faq.md
|
||||
- Support: resources/support.md
|
||||
- Changelog: resources/changelog.md
|
||||
|
||||
|
||||
316
docs/operator/backup_restore.md
Normal file
@ -0,0 +1,316 @@
|
||||
# AITBC Backup and Restore Procedures
|
||||
|
||||
This document outlines the backup and restore procedures for all AITBC system components including PostgreSQL, Redis, and blockchain ledger storage.
|
||||
|
||||
## Overview
|
||||
|
||||
The AITBC platform implements a comprehensive backup strategy with:
|
||||
- **Automated daily backups** via Kubernetes CronJobs
|
||||
- **Manual backup capabilities** for on-demand operations
|
||||
- **Incremental and full backup options** for ledger data
|
||||
- **Cloud storage integration** for off-site backups
|
||||
- **Retention policies** to manage storage efficiently (see the prune sketch below)
|
||||
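The retention bullet above amounts to an age-based prune of the off-site copies. A minimal sketch follows, assuming boto3 credentials are configured and using the bucket and prefix layout shown in the manual-backup examples later in this document; treat it as an illustration of the policy rather than the actual cleanup job.

```python
# Delete S3 backup objects older than the retention window (illustrative).
from datetime import datetime, timedelta, timezone
import boto3

def prune_old_backups(bucket: str = "aitbc-backups-default",
                      prefix: str = "postgresql/",
                      retention_days: int = 30) -> int:
    """Remove objects under `prefix` older than `retention_days`; return count."""
    s3 = boto3.client("s3")
    cutoff = datetime.now(timezone.utc) - timedelta(days=retention_days)
    deleted = 0
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        for obj in page.get("Contents", []):
            if obj["LastModified"] < cutoff:
                s3.delete_object(Bucket=bucket, Key=obj["Key"])
                deleted += 1
    return deleted
```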
|
||||
## Components
|
||||
|
||||
### 1. PostgreSQL Database
|
||||
- **Location**: Coordinator API persistent storage
|
||||
- **Data**: Jobs, marketplace offers/bids, user sessions, configuration
|
||||
- **Backup Format**: Custom PostgreSQL dump with compression
|
||||
- **Retention**: 30 days (configurable)
|
||||
|
||||
### 2. Redis Cache
|
||||
- **Location**: In-memory cache with persistence
|
||||
- **Data**: Session cache, temporary data, rate limiting
|
||||
- **Backup Format**: RDB snapshot + AOF (if enabled)
|
||||
- **Retention**: 30 days (configurable)
|
||||
|
||||
### 3. Ledger Storage
|
||||
- **Location**: Blockchain node persistent storage
|
||||
- **Data**: Blocks, transactions, receipts, wallet states
|
||||
- **Backup Format**: Compressed tar archives
|
||||
- **Retention**: 30 days (configurable)
|
||||
|
||||
## Automated Backups
|
||||
|
||||
### Kubernetes CronJob
|
||||
|
||||
The automated backup system runs daily at 2:00 AM UTC:
|
||||
|
||||
```bash
|
||||
# Deploy the backup CronJob
|
||||
kubectl apply -f infra/k8s/backup-cronjob.yaml
|
||||
|
||||
# Check CronJob status
|
||||
kubectl get cronjob aitbc-backup
|
||||
|
||||
# View backup jobs
|
||||
kubectl get jobs -l app=aitbc-backup
|
||||
|
||||
# View backup logs
|
||||
kubectl logs job/aitbc-backup-<timestamp>
|
||||
```
|
||||
|
||||
### Backup Schedule
|
||||
|
||||
| Time (UTC) | Component | Type | Retention |
|
||||
|------------|----------------|------------|-----------|
|
||||
| 02:00 | PostgreSQL | Full | 30 days |
|
||||
| 02:01 | Redis | Full | 30 days |
|
||||
| 02:02 | Ledger | Full | 30 days |
|
||||
|
||||
## Manual Backups
|
||||
|
||||
### PostgreSQL
|
||||
|
||||
```bash
|
||||
# Create a manual backup
|
||||
./infra/scripts/backup_postgresql.sh default my-backup-$(date +%Y%m%d)
|
||||
|
||||
# View available backups
|
||||
ls -la /tmp/postgresql-backups/
|
||||
|
||||
# Upload to S3 manually
|
||||
aws s3 cp /tmp/postgresql-backups/my-backup.sql.gz s3://aitbc-backups-default/postgresql/
|
||||
```
|
||||
|
||||
### Redis
|
||||
|
||||
```bash
|
||||
# Create a manual backup
|
||||
./infra/scripts/backup_redis.sh default my-redis-backup-$(date +%Y%m%d)
|
||||
|
||||
# Force background save before backup
|
||||
kubectl exec -n default deployment/redis -- redis-cli BGSAVE
|
||||
```
|
||||
|
||||
### Ledger Storage
|
||||
|
||||
```bash
|
||||
# Create a full backup
|
||||
./infra/scripts/backup_ledger.sh default my-ledger-backup-$(date +%Y%m%d)
|
||||
|
||||
# Create incremental backup
|
||||
./infra/scripts/backup_ledger.sh default incremental-backup-$(date +%Y%m%d) true
|
||||
```
|
||||
|
||||
## Restore Procedures
|
||||
|
||||
### PostgreSQL Restore
|
||||
|
||||
```bash
|
||||
# List available backups
|
||||
aws s3 ls s3://aitbc-backups-default/postgresql/
|
||||
|
||||
# Download backup from S3
|
||||
aws s3 cp s3://aitbc-backups-default/postgresql/postgresql-backup-20231222_020000.sql.gz /tmp/
|
||||
|
||||
# Restore database
|
||||
./infra/scripts/restore_postgresql.sh default /tmp/postgresql-backup-20231222_020000.sql.gz
|
||||
|
||||
# Verify restore
|
||||
kubectl exec -n default deployment/coordinator-api -- curl -s http://localhost:8011/v1/health
|
||||
```
|
||||
|
||||
### Redis Restore
|
||||
|
||||
```bash
|
||||
# Note: this sequence keeps the pod running, because `kubectl exec` and
# `kubectl cp` require a running pod. If AOF is enabled in the Redis config,
# disable it there before restarting and re-enable it (BGREWRITEAOF) after the
# restore, so Redis loads the restored dump.rdb rather than an AOF file.

# Copy backup file into the running Redis pod
kubectl cp /tmp/redis-backup.rdb default/redis-0:/data/dump.rdb

# Remove stale append-only data
kubectl exec -n default deployment/redis -- rm -f /data/appendonly.aof

# Shut down without saving so the copied dump.rdb is not overwritten;
# Kubernetes restarts the pod and Redis loads the restored snapshot
kubectl exec -n default deployment/redis -- redis-cli SHUTDOWN NOSAVE

# Verify restore
kubectl exec -n default deployment/redis -- redis-cli DBSIZE
|
||||
```
|
||||
|
||||
### Ledger Restore
|
||||
|
||||
```bash
|
||||
# Scale blockchain nodes down to a single pod so the restored data can be copied into it
kubectl scale deployment blockchain-node --replicas=1 -n default
|
||||
|
||||
# Extract backup
|
||||
tar -xzf /tmp/ledger-backup-20231222_020000.tar.gz -C /tmp/
|
||||
|
||||
# Copy ledger data
|
||||
kubectl cp /tmp/chain/ default/blockchain-node-0:/app/data/chain/
|
||||
kubectl cp /tmp/wallets/ default/blockchain-node-0:/app/data/wallets/
|
||||
kubectl cp /tmp/receipts/ default/blockchain-node-0:/app/data/receipts/
|
||||
|
||||
# Start blockchain nodes
|
||||
kubectl scale deployment blockchain-node --replicas=3 -n default
|
||||
|
||||
# Verify restore
|
||||
kubectl exec -n default deployment/blockchain-node -- curl -s http://localhost:8080/v1/blocks/head
|
||||
```
|
||||
|
||||
## Disaster Recovery
|
||||
|
||||
### Recovery Time Objective (RTO)
|
||||
|
||||
| Component | RTO Target | Notes |
|
||||
|----------------|------------|---------------------------------|
|
||||
| PostgreSQL | 1 hour | Database restore from backup |
|
||||
| Redis | 15 minutes | Cache rebuild from backup |
|
||||
| Ledger | 2 hours | Full chain synchronization |
|
||||
|
||||
### Recovery Point Objective (RPO)
|
||||
|
||||
| Component | RPO Target | Notes |
|
||||
|----------------|------------|---------------------------------|
|
||||
| PostgreSQL | 24 hours | Daily backups |
|
||||
| Redis | 24 hours | Daily backups |
|
||||
| Ledger | 24 hours | Daily full + incremental backups|
|
||||
|
||||
### Disaster Recovery Steps
|
||||
|
||||
1. **Assess Impact**
|
||||
```bash
|
||||
# Check component status
|
||||
kubectl get pods -n default
|
||||
kubectl get events --sort-by=.metadata.creationTimestamp
|
||||
```
|
||||
|
||||
2. **Restore Critical Services**
|
||||
```bash
|
||||
# Restore PostgreSQL first (critical for operations)
|
||||
./infra/scripts/restore_postgresql.sh default [latest-backup]
|
||||
|
||||
# Restore Redis cache
|
||||
./infra/scripts/restore_redis.sh default [latest-backup]
|
||||
|
||||
# Restore ledger data
|
||||
./infra/scripts/restore_ledger.sh default [latest-backup]
|
||||
```
|
||||
|
||||
3. **Verify System Health**
|
||||
```bash
|
||||
# Check all services
|
||||
kubectl get pods -n default
|
||||
|
||||
# Verify API endpoints
|
||||
curl -s http://coordinator-api:8011/v1/health
|
||||
curl -s http://blockchain-node:8080/v1/health
|
||||
```
|
||||
|
||||
## Monitoring and Alerting
|
||||
|
||||
### Backup Monitoring
|
||||
|
||||
Prometheus metrics track backup success/failure:
|
||||
|
||||
```yaml
|
||||
# AlertManager rules for backups
|
||||
- alert: BackupFailed
|
||||
expr: backup_success == 0
|
||||
for: 5m
|
||||
labels:
|
||||
severity: critical
|
||||
annotations:
|
||||
summary: "Backup failed for {{ $labels.component }}"
|
||||
description: "Backup for {{ $labels.component }} has failed for 5 minutes"
|
||||
```
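The `backup_success` metric referenced above has to be emitted by the backup jobs themselves. One option, assuming a Prometheus Pushgateway is reachable at `pushgateway:9091` (an assumption, not part of the stack described here), is to push a gauge at the end of each `backup_*.sh` script:

```bash
# Sketch: push the result of a backup run to a Pushgateway so Prometheus can alert on it
push_backup_metric() {
  local component="$1"   # postgresql | redis | ledger
  local success="$2"     # 1 on success, 0 on failure
  cat <<EOF | curl --silent --data-binary @- "http://pushgateway:9091/metrics/job/aitbc-backup/component/${component}"
# TYPE backup_success gauge
backup_success ${success}
EOF
}

# Example call after a successful PostgreSQL backup
push_backup_metric "postgresql" 1
```

The `BackupFailed` rule above then fires whenever the most recent push for a component reports `0`.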
|
||||
|
||||
### Log Monitoring
|
||||
|
||||
```bash
|
||||
# View backup logs
|
||||
kubectl logs -l app=aitbc-backup -n default --tail=100
|
||||
|
||||
# Monitor backup CronJob
|
||||
kubectl get cronjob aitbc-backup -w
|
||||
```
|
||||
|
||||
## Best Practices
|
||||
|
||||
### Backup Security
|
||||
|
||||
1. **Encryption**: Backups uploaded to S3 use server-side encryption
|
||||
2. **Access Control**: IAM policies restrict backup access
|
||||
3. **Retention**: Automatic cleanup of old backups
|
||||
4. **Validation**: Regular restore testing
|
||||
|
||||
### Performance Considerations
|
||||
|
||||
1. **Off-Peak Backups**: Scheduled during low traffic (2 AM UTC)
|
||||
2. **Sequential Processing**: Components are backed up one after another to avoid overlapping load
|
||||
3. **Compression**: All backups compressed to save storage
|
||||
4. **Incremental Backups**: Ledger supports incremental to reduce size
|
||||
|
||||
### Testing
|
||||
|
||||
1. **Monthly Restore Tests**: Validate backup integrity (a drill sketch follows this list)
|
||||
2. **Disaster Recovery Drills**: Quarterly full scenario testing
|
||||
3. **Documentation Updates**: Keep procedures current
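A minimal monthly drill might look like the sketch below. It assumes a scratch namespace that already runs a PostgreSQL deployment and uses the `jobs` table as a sanity probe; both are illustrative and should be adapted to your environment:

```bash
# Fetch the newest PostgreSQL backup and restore it into a throwaway namespace
NAMESPACE="backup-drill"                                   # illustrative scratch namespace
LATEST=$(aws s3 ls s3://aitbc-backups-default/postgresql/ | sort | tail -n 1 | awk '{print $4}')

aws s3 cp "s3://aitbc-backups-default/postgresql/${LATEST}" /tmp/
./infra/scripts/restore_postgresql.sh "${NAMESPACE}" "/tmp/${LATEST}"

# Sanity check: the restored database should answer a trivial query
kubectl exec -n "${NAMESPACE}" deployment/postgresql -- psql -U aitbc -c "SELECT count(*) FROM jobs;"
```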
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
### Common Issues
|
||||
|
||||
#### Backup Fails with "Permission Denied"
|
||||
```bash
|
||||
# Check service account permissions
|
||||
kubectl describe serviceaccount backup-service-account
|
||||
kubectl describe role backup-role
|
||||
```
|
||||
|
||||
#### Restore Fails with "Database in Use"
|
||||
```bash
|
||||
# Scale down application before restore
|
||||
kubectl scale deployment coordinator-api --replicas=0
|
||||
# Perform restore
|
||||
# Scale up after restore
|
||||
kubectl scale deployment coordinator-api --replicas=3
|
||||
```
|
||||
|
||||
#### Ledger Restore Incomplete
|
||||
```bash
|
||||
# Verify backup integrity
|
||||
tar -tzf ledger-backup.tar.gz
|
||||
# Check metadata.json for block height
|
||||
cat metadata.json | jq '.latest_block_height'
|
||||
```
|
||||
|
||||
### Getting Help
|
||||
|
||||
1. Check logs: `kubectl logs -l app=aitbc-backup`
|
||||
2. Verify storage: `df -h` on backup nodes
|
||||
3. Check network: Test S3 connectivity
|
||||
4. Review events: `kubectl get events --sort-by=.metadata.creationTimestamp`
|
||||
|
||||
## Configuration
|
||||
|
||||
### Environment Variables
|
||||
|
||||
| Variable | Default | Description |
|
||||
|------------------------|------------------|---------------------------------|
|
||||
| BACKUP_RETENTION_DAYS | 30 | Days to keep backups |
|
||||
| BACKUP_SCHEDULE | 0 2 * * * | Cron schedule for backups |
|
||||
| S3_BUCKET_PREFIX | aitbc-backups | S3 bucket name prefix |
|
||||
| COMPRESSION_LEVEL | 6 | gzip compression level |
|
||||
|
||||
### Customizing Backup Schedule
|
||||
|
||||
Edit the CronJob schedule in `infra/k8s/backup-cronjob.yaml`:
|
||||
|
||||
```yaml
|
||||
spec:
|
||||
schedule: "0 3 * * *" # Change to 3 AM UTC
|
||||
```
|
||||
|
||||
### Adjusting Retention
|
||||
|
||||
Modify retention in each backup script:
|
||||
|
||||
```bash
|
||||
# In backup_*.sh scripts
|
||||
RETENTION_DAYS=60 # Keep for 60 days instead of 30
|
||||
```
|
||||
273
docs/operator/beta-release-plan.md
Normal file
@ -0,0 +1,273 @@
|
||||
# AITBC Beta Release Plan
|
||||
|
||||
## Executive Summary
|
||||
|
||||
This document outlines the beta release plan for AITBC (AI Trusted Blockchain Computing), a blockchain platform designed for AI workloads. The release follows a phased approach: Alpha → Beta → Release Candidate (RC) → General Availability (GA).
|
||||
|
||||
## Release Phases
|
||||
|
||||
### Phase 1: Alpha Release (Completed)
|
||||
- **Duration**: 2 weeks
|
||||
- **Participants**: Internal team (10 members)
|
||||
- **Focus**: Core functionality validation
|
||||
- **Status**: ✅ Completed
|
||||
|
||||
### Phase 2: Beta Release (Current)
|
||||
- **Duration**: 6 weeks
|
||||
- **Participants**: 50-100 external testers
|
||||
- **Focus**: User acceptance testing, performance validation, security assessment
|
||||
- **Start Date**: 2025-01-15
|
||||
- **End Date**: 2025-02-26
|
||||
|
||||
### Phase 3: Release Candidate
|
||||
- **Duration**: 2 weeks
|
||||
- **Participants**: 20 selected beta testers
|
||||
- **Focus**: Final bug fixes, performance optimization
|
||||
- **Start Date**: 2025-03-04
|
||||
- **End Date**: 2025-03-18
|
||||
|
||||
### Phase 4: General Availability
|
||||
- **Date**: 2025-03-25
|
||||
- **Target**: Public launch
|
||||
|
||||
## Beta Release Timeline
|
||||
|
||||
### Week 1-2: Onboarding & Basic Flows
|
||||
- **Jan 15-19**: Tester onboarding and environment setup
|
||||
- **Jan 22-26**: Basic job submission and completion flows
|
||||
- **Milestone**: 80% of testers successfully submit and complete jobs
|
||||
|
||||
### Week 3-4: Marketplace & Explorer Testing
|
||||
- **Jan 29 - Feb 2**: Marketplace functionality testing
|
||||
- **Feb 5-9**: Explorer UI validation and transaction tracking
|
||||
- **Milestone**: 100 marketplace transactions completed
|
||||
|
||||
### Week 5-6: Stress Testing & Feedback
|
||||
- **Feb 12-16**: Performance stress testing (1000+ concurrent jobs)
|
||||
- **Feb 19-23**: Security testing and final feedback collection
|
||||
- **Milestone**: All critical bugs resolved
|
||||
|
||||
## User Acceptance Testing (UAT) Scenarios
|
||||
|
||||
### 1. Core Job Lifecycle
|
||||
- **Scenario**: Submit AI inference job → Miner picks up → Execution → Results delivery → Payment
|
||||
- **Test Cases**:
|
||||
- Job submission with various model types
|
||||
- Job monitoring and status tracking
|
||||
- Result retrieval and verification
|
||||
- Payment processing and wallet updates
|
||||
- **Success Criteria**: 95% success rate across 1000 test jobs
|
||||
|
||||
### 2. Marketplace Operations
|
||||
- **Scenario**: Create offer → Accept offer → Execute job → Complete transaction
|
||||
- **Test Cases**:
|
||||
- Offer creation and management
|
||||
- Bid acceptance and matching
|
||||
- Price discovery mechanisms
|
||||
- Dispute resolution
|
||||
- **Success Criteria**: 50 successful marketplace transactions
|
||||
|
||||
### 3. Explorer Functionality
|
||||
- **Scenario**: Transaction lookup → Job tracking → Address analysis
|
||||
- **Test Cases**:
|
||||
- Real-time transaction monitoring
|
||||
- Job history and status visualization
|
||||
- Wallet balance tracking
|
||||
- Block explorer features
|
||||
- **Success Criteria**: All transactions visible within 5 seconds
|
||||
|
||||
### 4. Wallet Management
|
||||
- **Scenario**: Wallet creation → Funding → Transactions → Backup/Restore
|
||||
- **Test Cases**:
|
||||
- Multi-signature wallet creation
|
||||
- Cross-chain transfers
|
||||
- Backup and recovery procedures
|
||||
- Staking and unstaking operations
|
||||
- **Success Criteria**: 100% wallet recovery success rate
|
||||
|
||||
### 5. Mining Operations
|
||||
- **Scenario**: Miner setup → Job acceptance → Mining rewards → Pool participation
|
||||
- **Test Cases**:
|
||||
- Miner registration and setup
|
||||
- Job bidding and execution
|
||||
- Reward distribution
|
||||
- Pool mining operations
|
||||
- **Success Criteria**: 90% of submitted jobs accepted by miners
|
||||
|
||||
### 6. Community Management
|
||||
|
||||
#### Discord Community Structure
|
||||
- **#announcements**: Official updates and milestones
|
||||
- **#beta-testers**: Private channel for testers only
|
||||
- **#bug-reports**: Structured bug reporting format
|
||||
- **#feature-feedback**: Feature requests and discussions
|
||||
- **#technical-support**: 24/7 support from the team
|
||||
|
||||
#### Regulatory Considerations
|
||||
- **KYC/AML**: Basic identity verification for testers
|
||||
- **Securities Law**: Beta tokens have no monetary value
|
||||
- **Tax Reporting**: Testnet transactions not taxable
|
||||
- **Export Controls**: Compliance with technology export laws
|
||||
|
||||
#### Geographic Restrictions
|
||||
Beta testing is not available in:
|
||||
- North Korea, Iran, Cuba, Syria, Crimea
|
||||
- Countries under US sanctions
|
||||
- Jurisdictions with unclear crypto regulations
|
||||
|
||||
### 7. Token Economics Validation
|
||||
- **Scenario**: Token issuance → Reward distribution → Staking yields → Fee mechanisms
|
||||
- **Test Cases**:
|
||||
- Mining reward calculations match whitepaper specs
|
||||
- Staking yields and unstaking penalties
|
||||
- Transaction fee burning and distribution
|
||||
- Marketplace fee structures
|
||||
- Token inflation/deflation mechanics
|
||||
- **Success Criteria**: All token operations within 1% of theoretical values
|
||||
|
||||
## Performance Benchmarks (Go/No-Go Criteria)
|
||||
|
||||
### Must-Have Metrics
|
||||
- **Transaction Throughput**: ≥ 100 TPS (Transactions Per Second)
|
||||
- **Job Completion Time**: ≤ 5 minutes for standard inference jobs
|
||||
- **API Response Time**: ≤ 200ms at the 95th percentile (see the measurement sketch after this list)
|
||||
- **System Uptime**: ≥ 99.9% during beta period
|
||||
- **MTTR (Mean Time To Recovery)**: ≤ 2 minutes (from chaos tests)
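A rough way to sample the latency criterion from a tester machine is sketched below; the endpoint URL is illustrative and `AITBC_API_KEY` is assumed to hold a valid tester key:

```bash
# Sample 200 requests and report the 95th-percentile total time
URL="https://api.beta.aitbc.io/v1/health"   # illustrative beta endpoint

for _ in $(seq 1 200); do
  curl -s -o /dev/null -w '%{time_total}\n' -H "X-API-Key: ${AITBC_API_KEY}" "${URL}"
done | sort -n | awk '{ a[NR] = $1 } END { print "p95:", a[int(NR * 0.95)] "s" }'
```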
|
||||
|
||||
### Nice-to-Have Metrics
|
||||
- **Transaction Throughput**: ≥ 500 TPS
|
||||
- **Job Completion Time**: ≤ 2 minutes
|
||||
- **API Response Time**: ≤ 100ms (95th percentile)
|
||||
- **Concurrent Users**: ≥ 1000 simultaneous users
|
||||
|
||||
## Security Testing
|
||||
|
||||
### Automated Security Scans
|
||||
- **Smart Contract Audits**: Completed by [Security Firm]
|
||||
- **Penetration Testing**: OWASP Top 10 validation
|
||||
- **Dependency Scanning**: CVE scan of all dependencies
|
||||
- **Chaos Testing**: Network partition and coordinator outage scenarios
|
||||
|
||||
### Manual Security Reviews
|
||||
- **Authorization Testing**: API key validation and permissions
|
||||
- **Data Privacy**: GDPR compliance validation
|
||||
- **Cryptography**: Proof verification and signature validation
|
||||
- **Infrastructure Security**: Kubernetes and cloud security review
|
||||
|
||||
## Test Environment Setup
|
||||
|
||||
### Beta Environment
|
||||
- **Network**: Separate testnet with faucet for test tokens
|
||||
- **Infrastructure**: Production-like setup with monitoring
|
||||
- **Data**: Reset weekly to ensure clean testing
|
||||
- **Support**: 24/7 Discord support channel
|
||||
|
||||
### Access Credentials
|
||||
- **Testnet Faucet**: 1000 AITBC tokens per tester
|
||||
- **API Keys**: Unique keys per tester with rate limits
|
||||
- **Wallet Seeds**: Generated per tester with backup instructions
|
||||
- **Mining Accounts**: Pre-configured mining pools for testing
|
||||
|
||||
## Feedback Collection Mechanisms
|
||||
|
||||
### Automated Collection
|
||||
- **Error Reporting**: Automatic crash reports and error logs
|
||||
- **Performance Metrics**: Client-side performance data
|
||||
- **Usage Analytics**: Feature usage tracking (anonymized)
|
||||
- **Survey System**: In-app feedback prompts
|
||||
|
||||
### Manual Collection
|
||||
- **Weekly Surveys**: Structured feedback on specific features
|
||||
- **Discord Channels**: Real-time feedback and discussions
|
||||
- **Office Hours**: Weekly Q&A sessions with the team
|
||||
- **Bug Bounty**: Program for critical issue discovery
|
||||
|
||||
## Success Criteria
|
||||
|
||||
### Go/No-Go Decision Points
|
||||
|
||||
#### Week 2 Checkpoint (Jan 26)
|
||||
- **Go Criteria**: 80% of testers onboarded, basic flows working
|
||||
- **Blockers**: Critical bugs in job submission/completion
|
||||
|
||||
#### Week 4 Checkpoint (Feb 9)
|
||||
- **Go Criteria**: 50 marketplace transactions, explorer functional
|
||||
- **Blockers**: Security vulnerabilities, performance < 50 TPS
|
||||
|
||||
#### Week 6 Final Decision (Feb 23)
|
||||
- **Go Criteria**: All UAT scenarios passed, benchmarks met
|
||||
- **Blockers**: Any critical security issue, MTTR > 5 minutes
|
||||
|
||||
### Overall Success Metrics
|
||||
- **User Satisfaction**: ≥ 4.0/5.0 average rating
|
||||
- **Bug Resolution**: 90% of reported bugs fixed
|
||||
- **Performance**: All benchmarks met
|
||||
- **Security**: No critical vulnerabilities
|
||||
|
||||
## Risk Management
|
||||
|
||||
### Technical Risks
|
||||
- **Consensus Issues**: Rollback to previous version
|
||||
- **Performance Degradation**: Auto-scaling and optimization
|
||||
- **Security Breaches**: Immediate patch and notification
|
||||
|
||||
### Operational Risks
|
||||
- **Test Environment Downtime**: Backup environment ready
|
||||
- **Low Tester Participation**: Incentive program adjustments
|
||||
- **Feature Scope Creep**: Strict feature freeze after Week 4
|
||||
|
||||
### Mitigation Strategies
|
||||
- **Daily Health Checks**: Automated monitoring and alerts
|
||||
- **Rollback Plan**: Documented procedures for quick rollback
|
||||
- **Communication Plan**: Regular updates to all stakeholders
|
||||
|
||||
## Communication Plan
|
||||
|
||||
### Internal Updates
|
||||
- **Daily Standups**: Development team sync
|
||||
- **Weekly Reports**: Progress to leadership
|
||||
- **Bi-weekly Demos**: Feature demonstrations
|
||||
|
||||
### External Updates
|
||||
- **Beta Newsletter**: Weekly updates to testers
|
||||
- **Blog Posts**: Public progress updates
|
||||
- **Social Media**: Regular platform updates
|
||||
|
||||
## Post-Beta Activities
|
||||
|
||||
### RC Phase Preparation
|
||||
- **Bug Triage**: Prioritize and assign all reported issues
|
||||
- **Performance Tuning**: Optimize based on beta metrics
|
||||
- **Documentation Updates**: Incorporate beta feedback
|
||||
|
||||
### GA Preparation
|
||||
- **Final Security Review**: Complete audit and penetration test
|
||||
- **Infrastructure Scaling**: Prepare for production load
|
||||
- **Support Team Training**: Train and equip the customer support team
|
||||
|
||||
## Appendix
|
||||
|
||||
### A. Test Case Matrix
|
||||
[Detailed test case spreadsheet link]
|
||||
|
||||
### B. Performance Benchmark Results
|
||||
[Benchmark data and graphs]
|
||||
|
||||
### C. Security Audit Reports
|
||||
[Audit firm reports and findings]
|
||||
|
||||
### D. Feedback Analysis
|
||||
[Summary of all user feedback and actions taken]
|
||||
|
||||
## Contact Information
|
||||
|
||||
- **Beta Program Manager**: beta@aitbc.io
|
||||
- **Technical Support**: support@aitbc.io
|
||||
- **Security Issues**: security@aitbc.io
|
||||
- **Discord Community**: https://discord.gg/aitbc
|
||||
|
||||
---
|
||||
|
||||
*Last Updated: 2025-01-10*
|
||||
*Version: 1.0*
|
||||
*Next Review: 2025-01-17*
|
||||
@ -6,17 +6,21 @@ This document tracks current and planned TCP port assignments across the AITBC d
|
||||
|
||||
| Port | Service | Location | Notes |
|
||||
| --- | --- | --- | --- |
|
||||
| 8011 | Coordinator API (dev) | `apps/coordinator-api/` | Development coordinator API with job and marketplace endpoints. |
|
||||
| 8071 | Wallet Daemon API | `apps/wallet-daemon/` | REST and JSON-RPC wallet service with receipt verification. |
|
||||
| 8080 | Blockchain RPC API (FastAPI) | `apps/blockchain-node/scripts/devnet_up.sh` → `python -m uvicorn aitbc_chain.app:app` | Exposes REST/WebSocket RPC endpoints for blocks, transactions, receipts. |
|
||||
| 8090 | Mock Coordinator API | `apps/blockchain-node/scripts/devnet_up.sh` → `uvicorn mock_coordinator:app` | Generates synthetic coordinator/miner telemetry consumed by Grafana dashboards. |
|
||||
| 9090 | Prometheus (planned default) | `apps/blockchain-node/observability/` (targets to be wired) | Scrapes blockchain node + mock coordinator metrics. Ensure firewall allows local-only access. |
|
||||
| 3000 | Grafana (planned default) | `apps/blockchain-node/observability/grafana-dashboard.json` | Visualizes metrics dashboards; behind devnet Docker compose or local binary. |
|
||||
| 8100 | Pool Hub API (planned) | `apps/pool-hub/` | FastAPI service for miner registry and matching. |
|
||||
| 8900 | Coordinator API (production) | `apps/coordinator-api/` | Production-style deployment port. |
|
||||
| 9090 | Prometheus | `apps/blockchain-node/observability/` | Scrapes blockchain node + mock coordinator metrics. |
|
||||
| 3000 | Grafana | `apps/blockchain-node/observability/` | Visualizes metrics dashboards for blockchain and coordinator. |
|
||||
| 4173 | Explorer Web (dev) | `apps/explorer-web/` | Vite dev server for blockchain explorer interface. |
|
||||
| 5173 | Marketplace Web (dev) | `apps/marketplace-web/` | Vite dev server for marketplace interface. |
|
||||
|
||||
## Reserved / Planned Ports
|
||||
|
||||
- **Coordinator API (production)** – TBD (`8000` suggested). Align with `apps/coordinator-api/README.md` once the service runs outside mock mode.
|
||||
- **Marketplace Web** – Vite dev server defaults to `5173`; document overrides when deploying behind nginx.
|
||||
- **Explorer Web** – Vite dev server defaults to `4173`; ensure it does not collide with other tooling on developer machines.
|
||||
- **Pool Hub API** – Reserve `8100` for the FastAPI service when devnet integration begins.
|
||||
- **Miner Node** – No default port (connects to coordinator via HTTP).
|
||||
- **JavaScript/Python SDKs** – Client libraries, no dedicated ports.
|
||||
|
||||
## Guidance
|
||||
|
||||
@ -26,11 +26,11 @@ These instructions cover the newly scaffolded services. Install dependencies usi
|
||||
|
||||
5. Run the API locally (development):
|
||||
```bash
|
||||
poetry run uvicorn app.main:app --host 0.0.0.0 --port 8011 --reload
|
||||
poetry run uvicorn app.main:app --host 127.0.0.2 --port 8011 --reload
|
||||
```
|
||||
6. Production-style launch using Gunicorn (ports start at 8900):
|
||||
```bash
|
||||
poetry run gunicorn app.main:app -k uvicorn.workers.UvicornWorker -b 0.0.0.0:8900
|
||||
poetry run gunicorn app.main:app -k uvicorn.workers.UvicornWorker -b 127.0.0.2:8900
|
||||
```
|
||||
7. Generate a signing key (optional):
|
||||
```bash
|
||||
@ -165,7 +165,7 @@ These instructions cover the newly scaffolded services. Install dependencies usi
|
||||
Populate `COORDINATOR_BASE_URL` and `COORDINATOR_API_KEY` to reuse the coordinator API when verifying receipts.
|
||||
4. Run the API locally:
|
||||
```bash
|
||||
poetry run uvicorn app.main:app --host 0.0.0.0 --port 8071 --reload
|
||||
poetry run uvicorn app.main:app --host 127.0.0.2 --port 8071 --reload
|
||||
```
|
||||
5. REST endpoints:
|
||||
- `GET /v1/receipts/{job_id}` – fetch + verify latest coordinator receipt.
|
||||
485
docs/operator/incident-runbooks.md
Normal file
@ -0,0 +1,485 @@
|
||||
# AITBC Incident Runbooks
|
||||
|
||||
This document contains specific runbooks for common incident scenarios, based on our chaos testing validation.
|
||||
|
||||
## Runbook: Coordinator API Outage
|
||||
|
||||
### Based on Chaos Test: `chaos_test_coordinator.py`
|
||||
|
||||
### Symptoms
|
||||
- 503/504 errors on all endpoints
|
||||
- Health check failures
|
||||
- Job submission failures
|
||||
- Marketplace unresponsive
|
||||
|
||||
### MTTR Target: 2 minutes
|
||||
|
||||
### Immediate Actions (0-2 minutes)
|
||||
```bash
|
||||
# 1. Check pod status
|
||||
kubectl get pods -n default -l app.kubernetes.io/name=coordinator
|
||||
|
||||
# 2. Check recent events
|
||||
kubectl get events -n default --sort-by=.metadata.creationTimestamp | tail -20
|
||||
|
||||
# 3. Check if pods are crashlooping
|
||||
kubectl describe pod -n default -l app.kubernetes.io/name=coordinator
|
||||
|
||||
# 4. Quick restart if needed
|
||||
kubectl rollout restart deployment/coordinator -n default
|
||||
```
|
||||
|
||||
### Investigation (2-10 minutes)
|
||||
1. **Review Logs**
|
||||
```bash
|
||||
kubectl logs -n default deployment/coordinator --tail=100
|
||||
```
|
||||
|
||||
2. **Check Resource Limits**
|
||||
```bash
|
||||
kubectl top pods -n default -l app.kubernetes.io/name=coordinator
|
||||
```
|
||||
|
||||
3. **Verify Database Connectivity**
|
||||
```bash
|
||||
kubectl exec -n default deployment/coordinator -- nc -z postgresql 5432
|
||||
```
|
||||
|
||||
4. **Check Redis Connection**
|
||||
```bash
|
||||
kubectl exec -n default deployment/coordinator -- redis-cli -h redis ping
|
||||
```
|
||||
|
||||
### Recovery Actions
|
||||
1. **Scale Up if Resource Starved**
|
||||
```bash
|
||||
kubectl scale deployment/coordinator --replicas=5 -n default
|
||||
```
|
||||
|
||||
2. **Manual Pod Deletion if Stuck**
|
||||
```bash
|
||||
kubectl delete pods -n default -l app.kubernetes.io/name=coordinator --force --grace-period=0
|
||||
```
|
||||
|
||||
3. **Rollback Deployment**
|
||||
```bash
|
||||
kubectl rollout undo deployment/coordinator -n default
|
||||
```
|
||||
|
||||
### Verification
|
||||
```bash
|
||||
# Test health endpoint
|
||||
curl -f http://127.0.0.2:8011/v1/health
|
||||
|
||||
# Test API with sample request
|
||||
curl -X GET http://127.0.0.2:8011/v1/jobs -H "X-API-Key: test-key"
|
||||
```
|
||||
|
||||
## Runbook: Network Partition
|
||||
|
||||
### Based on Chaos Test: `chaos_test_network.py`
|
||||
|
||||
### Symptoms
|
||||
- Blockchain nodes not communicating
|
||||
- Consensus stalled
|
||||
- High finality latency
|
||||
- Transaction processing delays
|
||||
|
||||
### MTTR Target: 5 minutes
|
||||
|
||||
### Immediate Actions (0-5 minutes)
|
||||
```bash
|
||||
# 1. Check peer connectivity
|
||||
kubectl exec -n default deployment/blockchain-node -- curl -s http://localhost:8080/v1/peers | jq
|
||||
|
||||
# 2. Check consensus status
|
||||
kubectl exec -n default deployment/blockchain-node -- curl -s http://localhost:8080/v1/consensus | jq
|
||||
|
||||
# 3. Check network policies
|
||||
kubectl get networkpolicies -n default
|
||||
```
|
||||
|
||||
### Investigation (5-15 minutes)
|
||||
1. **Identify Partitioned Nodes**
|
||||
```bash
|
||||
# Check each node's peer count
|
||||
for pod in $(kubectl get pods -n default -l app.kubernetes.io/name=blockchain-node -o jsonpath='{.items[*].metadata.name}'); do
|
||||
echo "Pod: $pod"
|
||||
kubectl exec -n default $pod -- curl -s http://localhost:8080/v1/peers | jq '. | length'
|
||||
done
|
||||
```
|
||||
|
||||
2. **Check Network Policies**
|
||||
```bash
|
||||
kubectl describe networkpolicy default-deny-all-ingress -n default
|
||||
kubectl describe networkpolicy blockchain-node-netpol -n default
|
||||
```
|
||||
|
||||
3. **Verify DNS Resolution**
|
||||
```bash
|
||||
kubectl exec -n default deployment/blockchain-node -- nslookup blockchain-node
|
||||
```
|
||||
|
||||
### Recovery Actions
|
||||
1. **Remove Problematic Network Rules**
|
||||
```bash
|
||||
# Flush iptables on affected nodes
|
||||
for pod in $(kubectl get pods -n default -l app.kubernetes.io/name=blockchain-node -o jsonpath='{.items[*].metadata.name}'); do
|
||||
kubectl exec -n default $pod -- iptables -F
|
||||
done
|
||||
```
|
||||
|
||||
2. **Restart Network Components**
|
||||
```bash
|
||||
kubectl rollout restart deployment/blockchain-node -n default
|
||||
```
|
||||
|
||||
3. **Force Re-peering**
|
||||
```bash
|
||||
# Delete and recreate pods to force re-peering
|
||||
kubectl delete pods -n default -l app.kubernetes.io/name=blockchain-node
|
||||
```
|
||||
|
||||
### Verification
|
||||
```bash
|
||||
# Wait for consensus to resume
|
||||
watch -n 5 'kubectl exec -n default deployment/blockchain-node -- curl -s http://localhost:8080/v1/consensus | jq .height'
|
||||
|
||||
# Verify peer connectivity
|
||||
kubectl exec -n default deployment/blockchain-node -- curl -s http://localhost:8080/v1/peers | jq '. | length'
|
||||
```
|
||||
|
||||
## Runbook: Database Failure
|
||||
|
||||
### Based on Chaos Test: `chaos_test_database.py`
|
||||
|
||||
### Symptoms
|
||||
- Database connection errors
|
||||
- Service degradation
|
||||
- Failed transactions
|
||||
- High error rates
|
||||
|
||||
### MTTR Target: 3 minutes
|
||||
|
||||
### Immediate Actions (0-3 minutes)
|
||||
```bash
|
||||
# 1. Check PostgreSQL status
|
||||
kubectl exec -n default deployment/postgresql -- pg_isready
|
||||
|
||||
# 2. Check connection count
|
||||
kubectl exec -n default deployment/postgresql -- psql -U aitbc -c "SELECT count(*) FROM pg_stat_activity;"
|
||||
|
||||
# 3. Check replica lag
|
||||
kubectl exec -n default deployment/postgresql-replica -- psql -U aitbc -c "SELECT pg_last_xact_replay_timestamp();"
|
||||
```
|
||||
|
||||
### Investigation (3-10 minutes)
|
||||
1. **Review Database Logs**
|
||||
```bash
|
||||
kubectl logs -n default deployment/postgresql --tail=100
|
||||
```
|
||||
|
||||
2. **Check Resource Usage**
|
||||
```bash
|
||||
kubectl top pods -n default -l app.kubernetes.io/name=postgresql
|
||||
df -h /var/lib/postgresql/data
|
||||
```
|
||||
|
||||
3. **Identify Long-running Queries**
|
||||
```bash
|
||||
kubectl exec -n default deployment/postgresql -- psql -U aitbc -c "SELECT pid, now() - pg_stat_activity.query_start AS duration, query FROM pg_stat_activity WHERE state = 'active' AND now() - pg_stat_activity.query_start > interval '5 minutes';"
|
||||
```
|
||||
|
||||
### Recovery Actions
|
||||
1. **Kill Idle Connections**
|
||||
```bash
|
||||
kubectl exec -n default deployment/postgresql -- psql -U aitbc -c "SELECT pg_terminate_backend(pid) FROM pg_stat_activity WHERE state = 'idle' AND query_start < now() - interval '1 hour';"
|
||||
```
|
||||
|
||||
2. **Restart PostgreSQL**
|
||||
```bash
|
||||
kubectl rollout restart deployment/postgresql -n default
|
||||
```
|
||||
|
||||
3. **Failover to Replica**
|
||||
```bash
|
||||
# Promote replica if primary fails
|
||||
kubectl exec -n default deployment/postgresql-replica -- pg_ctl promote -D /var/lib/postgresql/data
|
||||
```
|
||||
|
||||
### Verification
|
||||
```bash
|
||||
# Test database connectivity
|
||||
kubectl exec -n default deployment/coordinator -- python -c "import psycopg2; conn = psycopg2.connect('postgresql://aitbc:password@postgresql:5432/aitbc'); print('Connected')"
|
||||
|
||||
# Check application health
|
||||
curl -f http://127.0.0.2:8011/v1/health
|
||||
```
|
||||
|
||||
## Runbook: Redis Failure
|
||||
|
||||
### Symptoms
|
||||
- Caching failures
|
||||
- Session loss
|
||||
- Increased database load
|
||||
- Slow response times
|
||||
|
||||
### MTTR Target: 2 minutes
|
||||
|
||||
### Immediate Actions (0-2 minutes)
|
||||
```bash
|
||||
# 1. Check Redis status
|
||||
kubectl exec -n default deployment/redis -- redis-cli ping
|
||||
|
||||
# 2. Check memory usage
|
||||
kubectl exec -n default deployment/redis -- redis-cli info memory | grep used_memory_human
|
||||
|
||||
# 3. Check connection count
|
||||
kubectl exec -n default deployment/redis -- redis-cli info clients | grep connected_clients
|
||||
```
|
||||
|
||||
### Investigation (2-5 minutes)
|
||||
1. **Review Redis Logs**
|
||||
```bash
|
||||
kubectl logs -n default deployment/redis --tail=100
|
||||
```
|
||||
|
||||
2. **Check for Eviction**
|
||||
```bash
|
||||
kubectl exec -n default deployment/redis -- redis-cli info stats | grep evicted_keys
|
||||
```
|
||||
|
||||
3. **Identify Large Keys**
|
||||
```bash
|
||||
kubectl exec -n default deployment/redis -- redis-cli --bigkeys
|
||||
```
|
||||
|
||||
### Recovery Actions
|
||||
1. **Clear Stale Keys (use with caution)**
```bash
# Runs entirely inside the Redis pod; the "*:*" pattern matches broadly, so narrow it before using in production
kubectl exec -n default deployment/redis -- sh -c "redis-cli --scan --pattern '*:*' | xargs -r -n 100 redis-cli DEL"
```
|
||||
|
||||
2. **Restart Redis**
|
||||
```bash
|
||||
kubectl rollout restart deployment/redis -n default
|
||||
```
|
||||
|
||||
3. **Scale Redis Cluster**
|
||||
```bash
|
||||
kubectl scale deployment/redis --replicas=3 -n default
|
||||
```
|
||||
|
||||
### Verification
|
||||
```bash
|
||||
# Test Redis connectivity
|
||||
kubectl exec -n default deployment/coordinator -- redis-cli -h redis ping
|
||||
|
||||
# Check application performance
|
||||
curl -w "@curl-format.txt" -o /dev/null -s http://127.0.0.2:8011/v1/health
|
||||
```
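The `@curl-format.txt` file used above is not defined elsewhere in this runbook; a minimal version can be created once on the machine running the checks:

```bash
cat > curl-format.txt <<'EOF'
time_namelookup:    %{time_namelookup}s\n
time_connect:       %{time_connect}s\n
time_starttransfer: %{time_starttransfer}s\n
time_total:         %{time_total}s\n
EOF
```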
|
||||
|
||||
## Runbook: High CPU/Memory Usage
|
||||
|
||||
### Symptoms
|
||||
- Slow response times
|
||||
- Pod evictions
|
||||
- OOM errors
|
||||
- System degradation
|
||||
|
||||
### MTTR Target: 5 minutes
|
||||
|
||||
### Immediate Actions (0-5 minutes)
|
||||
```bash
|
||||
# 1. Check resource usage
|
||||
kubectl top pods -n default
|
||||
kubectl top nodes
|
||||
|
||||
# 2. Identify resource-hungry pods
|
||||
kubectl exec -n default deployment/coordinator -- top
|
||||
|
||||
# 3. Check for OOM kills
|
||||
dmesg | grep -i "killed process"
|
||||
```
|
||||
|
||||
### Investigation (5-15 minutes)
|
||||
1. **Analyze Resource Usage**
|
||||
```bash
|
||||
# Detailed pod metrics
|
||||
kubectl exec -n default deployment/coordinator -- ps aux --sort=-%cpu | head -10
|
||||
kubectl exec -n default deployment/coordinator -- ps aux --sort=-%mem | head -10
|
||||
```
|
||||
|
||||
2. **Check Resource Limits**
|
||||
```bash
|
||||
kubectl describe pod -n default -l app.kubernetes.io/name=coordinator | grep -A 10 Limits
|
||||
```
|
||||
|
||||
3. **Review Application Metrics**
|
||||
```bash
|
||||
# Check Prometheus metrics
|
||||
curl http://127.0.0.2:8011/metrics | grep -E "(cpu|memory)"
|
||||
```
|
||||
|
||||
### Recovery Actions
|
||||
1. **Scale Services**
|
||||
```bash
|
||||
kubectl scale deployment/coordinator --replicas=5 -n default
|
||||
kubectl scale deployment/blockchain-node --replicas=3 -n default
|
||||
```
|
||||
|
||||
2. **Increase Resource Limits**
|
||||
```bash
|
||||
kubectl patch deployment coordinator -p '{"spec":{"template":{"spec":{"containers":[{"name":"coordinator","resources":{"limits":{"cpu":"2000m","memory":"4Gi"}}}]}}}}'
|
||||
```
|
||||
|
||||
3. **Restart Affected Services**
|
||||
```bash
|
||||
kubectl rollout restart deployment/coordinator -n default
|
||||
```
|
||||
|
||||
### Verification
|
||||
```bash
|
||||
# Monitor resource usage
|
||||
watch -n 5 'kubectl top pods -n default'
|
||||
|
||||
# Test service performance
|
||||
curl -w "@curl-format.txt" -o /dev/null -s http://127.0.0.2:8011/v1/health
|
||||
```
|
||||
|
||||
## Runbook: Storage Issues
|
||||
|
||||
### Symptoms
|
||||
- Disk space warnings
|
||||
- Write failures
|
||||
- Database errors
|
||||
- Pod crashes
|
||||
|
||||
### MTTR Target: 10 minutes
|
||||
|
||||
### Immediate Actions (0-10 minutes)
|
||||
```bash
|
||||
# 1. Check disk usage
|
||||
df -h
|
||||
kubectl exec -n default deployment/postgresql -- df -h
|
||||
|
||||
# 2. Identify large files
|
||||
find /var/log -name "*.log" -size +100M
|
||||
kubectl exec -n default deployment/postgresql -- find /var/lib/postgresql -type f -size +1G
|
||||
|
||||
# 3. Clean up logs
|
||||
kubectl logs -n default deployment/coordinator --tail=1000 > /tmp/coordinator.log && truncate -s 0 /var/log/containers/coordinator*.log
|
||||
```
|
||||
|
||||
### Investigation (10-20 minutes)
|
||||
1. **Analyze Storage Usage**
|
||||
```bash
|
||||
du -sh /var/log/*
|
||||
du -sh /var/lib/docker/*
|
||||
```
|
||||
|
||||
2. **Check PVC Usage**
|
||||
```bash
|
||||
kubectl get pvc -n default
|
||||
kubectl describe pvc postgresql-data -n default
|
||||
```
|
||||
|
||||
3. **Review Retention Policies**
|
||||
```bash
|
||||
kubectl get cronjobs -n default
|
||||
kubectl describe cronjob log-cleanup -n default
|
||||
```
|
||||
|
||||
### Recovery Actions
|
||||
1. **Expand Storage**
|
||||
```bash
|
||||
kubectl patch pvc postgresql-data -p '{"spec":{"resources":{"requests":{"storage":"100Gi"}}}}'
|
||||
```
|
||||
|
||||
2. **Force Cleanup**
|
||||
```bash
|
||||
# Clean old logs
|
||||
find /var/log -name "*.log" -mtime +7 -delete
|
||||
|
||||
# Clean Docker images
|
||||
docker system prune -a
|
||||
```
|
||||
|
||||
3. **Restart Services**
|
||||
```bash
|
||||
kubectl rollout restart deployment/postgresql -n default
|
||||
```
|
||||
|
||||
### Verification
|
||||
```bash
|
||||
# Check disk space
|
||||
df -h
|
||||
|
||||
# Verify database operations
|
||||
kubectl exec -n default deployment/postgresql -- psql -U aitbc -c "SELECT 1;"
|
||||
```
|
||||
|
||||
## Emergency Contact Procedures
|
||||
|
||||
### Escalation Matrix
|
||||
1. **Level 1**: On-call engineer (5 minutes)
|
||||
2. **Level 2**: On-call secondary (15 minutes)
|
||||
3. **Level 3**: Engineering manager (30 minutes)
|
||||
4. **Level 4**: CTO (1 hour, critical only)
|
||||
|
||||
### War Room Activation
|
||||
```bash
|
||||
# Create Slack channel
|
||||
/slack create-channel #incident-$(date +%Y%m%d-%H%M%S)
|
||||
|
||||
# Invite stakeholders
|
||||
/slack invite @sre-team @engineering-manager @cto
|
||||
|
||||
# Start Zoom meeting
|
||||
/zoom start "AITBC Incident War Room"
|
||||
```
|
||||
|
||||
### Customer Communication
|
||||
1. **Status Page Update** (5 minutes)
|
||||
2. **Email Notification** (15 minutes)
|
||||
3. **Twitter Update** (30 minutes, critical only)
|
||||
|
||||
## Post-Incident Checklist
|
||||
|
||||
### Immediate (0-1 hour)
|
||||
- [ ] Service fully restored
|
||||
- [ ] Monitoring normal
|
||||
- [ ] Status page updated
|
||||
- [ ] Stakeholders notified
|
||||
|
||||
### Short-term (1-24 hours)
|
||||
- [ ] Incident document created
|
||||
- [ ] Root cause identified
|
||||
- [ ] Runbooks updated
|
||||
- [ ] Post-mortem scheduled
|
||||
|
||||
### Long-term (1-7 days)
|
||||
- [ ] Post-mortem completed
|
||||
- [ ] Action items assigned
|
||||
- [ ] Monitoring improved
|
||||
- [ ] Process updated
|
||||
|
||||
## Runbook Maintenance
|
||||
|
||||
### Review Schedule
|
||||
- **Monthly**: Review and update runbooks
|
||||
- **Quarterly**: Full review and testing
|
||||
- **Annually**: Major revision
|
||||
|
||||
### Update Process
|
||||
1. Test runbook procedures
|
||||
2. Document lessons learned
|
||||
3. Update procedures
|
||||
4. Train team members
|
||||
5. Update documentation
|
||||
|
||||
---
|
||||
|
||||
*Version: 1.0*
|
||||
*Last Updated: 2024-12-22*
|
||||
*Owner: SRE Team*
|
||||
40
docs/operator/index.md
Normal file
@ -0,0 +1,40 @@
|
||||
# AITBC Operator Documentation
|
||||
|
||||
Welcome to the AITBC operator documentation. This section contains resources for deploying, operating, and maintaining AITBC infrastructure.
|
||||
|
||||
## Deployment
|
||||
|
||||
- [Deployment Guide](deployment/run.md) - How to deploy AITBC components
|
||||
- [Installation](deployment/installation.md) - System requirements and installation
|
||||
- [Configuration](deployment/configuration.md) - Configuration options
|
||||
- [Ports](deployment/ports.md) - Network ports and requirements
|
||||
|
||||
## Operations
|
||||
|
||||
- [Backup & Restore](backup_restore.md) - Data backup and recovery procedures
|
||||
- [Security](security.md) - Security best practices and hardening
|
||||
- [Monitoring](monitoring/monitoring-playbook.md) - System monitoring and observability
|
||||
- [Incident Response](incident-runbooks.md) - Incident handling procedures
|
||||
|
||||
## Architecture
|
||||
|
||||
- [System Architecture](../reference/architecture/) - Understanding AITBC architecture
|
||||
- [Components](../reference/architecture/) - Component documentation
|
||||
- [Multi-tenancy](../reference/architecture/) - Multi-tenant infrastructure
|
||||
|
||||
## Scaling
|
||||
|
||||
- [Scaling Guide](scaling.md) - How to scale AITBC infrastructure
|
||||
- [Performance Tuning](performance.md) - Performance optimization
|
||||
- [Capacity Planning](capacity.md) - Resource planning
|
||||
|
||||
## Reference
|
||||
|
||||
- [Glossary](../reference/glossary.md) - Terms and definitions
|
||||
- [Troubleshooting](../user-guide/troubleshooting.md) - Common issues and solutions
|
||||
- [FAQ](../user-guide/faq.md) - Frequently asked questions
|
||||
|
||||
## Support
|
||||
|
||||
- [Getting Help](../user-guide/support.md) - How to get support
|
||||
- [Contact](../user-guide/support.md) - Contact information
|
||||
449
docs/operator/monitoring/monitoring-playbook.md
Normal file
@ -0,0 +1,449 @@
|
||||
# AITBC Monitoring Playbook & On-Call Guide
|
||||
|
||||
## Overview
|
||||
|
||||
This document provides comprehensive monitoring procedures, on-call rotations, and incident response playbooks for the AITBC platform. It ensures reliable operation of all services and quick resolution of issues.
|
||||
|
||||
## Service Overview
|
||||
|
||||
### Core Services
|
||||
- **Coordinator API**: Job management and marketplace coordination
|
||||
- **Blockchain Nodes**: Consensus and transaction processing
|
||||
- **Explorer UI**: Block explorer and transaction visualization
|
||||
- **Marketplace UI**: User interface for marketplace operations
|
||||
- **Wallet Daemon**: Cryptographic key management
|
||||
- **Infrastructure**: PostgreSQL, Redis, Kubernetes cluster
|
||||
|
||||
### Critical Metrics
|
||||
- **Availability**: 99.9% uptime SLA
|
||||
- **Performance**: <200ms API response time (95th percentile)
|
||||
- **Throughput**: 100+ TPS sustained
|
||||
- **MTTR**: <2 minutes for critical incidents
|
||||
|
||||
## On-Call Rotation
|
||||
|
||||
### Rotation Schedule
|
||||
- **Primary On-Call**: 1 week rotation, Monday 00:00 UTC to Monday 00:00 UTC
|
||||
- **Secondary On-Call**: Shadow primary, handles escalations
|
||||
- **Tertiary**: Backup for both primary and secondary
|
||||
- **Rotation Handoff**: Every Monday at 08:00 UTC
|
||||
|
||||
### Team Structure
|
||||
```
|
||||
Week 1: Alice (Primary), Bob (Secondary), Carol (Tertiary)
|
||||
Week 2: Bob (Primary), Carol (Secondary), Alice (Tertiary)
|
||||
Week 3: Carol (Primary), Alice (Secondary), Bob (Tertiary)
|
||||
```
|
||||
|
||||
### Handoff Procedures
|
||||
1. **Pre-handoff Check** (Sunday 22:00 UTC):
|
||||
- Review active incidents
|
||||
- Check scheduled maintenance
|
||||
- Verify monitoring systems health
|
||||
|
||||
2. **Handoff Meeting** (Monday 08:00 UTC):
|
||||
- 15-minute video call
|
||||
- Discuss current issues
|
||||
- Transfer knowledge
|
||||
- Confirm contact information
|
||||
|
||||
3. **Post-handoff** (Monday 09:00 UTC):
|
||||
- Primary acknowledges receipt
|
||||
- Update on-call calendar
|
||||
- Test alerting systems
|
||||
|
||||
### Contact Information
|
||||
- **Primary**: +1-555-ONCALL-1 (PagerDuty)
|
||||
- **Secondary**: +1-555-ONCALL-2 (PagerDuty)
|
||||
- **Tertiary**: +1-555-ONCALL-3 (PagerDuty)
|
||||
- **Escalation Manager**: +1-555-ESCALATE
|
||||
- **Emergency**: +1-555-EMERGENCY (Critical infrastructure only)
|
||||
|
||||
## Alerting & Escalation
|
||||
|
||||
### Alert Severity Levels
|
||||
|
||||
#### Critical (P0)
|
||||
- Service completely down
|
||||
- Data loss or corruption
|
||||
- Security breach
|
||||
- SLA violation in progress
|
||||
- **Response Time**: 5 minutes
|
||||
- **Escalation**: 15 minutes if no response
|
||||
|
||||
#### High (P1)
|
||||
- Significant degradation
|
||||
- Partial service outage
|
||||
- High error rates (>10%)
|
||||
- **Response Time**: 15 minutes
|
||||
- **Escalation**: 1 hour if no response
|
||||
|
||||
#### Medium (P2)
|
||||
- Minor degradation
|
||||
- Elevated error rates (5-10%)
|
||||
- Performance issues
|
||||
- **Response Time**: 1 hour
|
||||
- **Escalation**: 4 hours if no response
|
||||
|
||||
#### Low (P3)
|
||||
- Informational alerts
|
||||
- Non-critical issues
|
||||
- **Response Time**: 4 hours
|
||||
- **Escalation**: 24 hours if no response
|
||||
|
||||
### Escalation Policy
|
||||
1. **Level 1**: Primary On-Call (5-60 minutes)
|
||||
2. **Level 2**: Secondary On-Call (15 minutes - 4 hours)
|
||||
3. **Level 3**: Tertiary On-Call (1 hour - 24 hours)
|
||||
4. **Level 4**: Engineering Manager (4 hours)
|
||||
5. **Level 5**: CTO (Critical incidents only)
|
||||
|
||||
### Alert Channels
|
||||
- **PagerDuty**: Primary alerting system
|
||||
- **Slack**: #on-call-aitbc channel
|
||||
- **Email**: oncall@aitbc.io
|
||||
- **SMS**: Critical alerts only
|
||||
- **Phone**: Critical incidents only
|
||||
|
||||
## Incident Response
|
||||
|
||||
### Incident Classification
|
||||
|
||||
#### SEV-0 (Critical)
|
||||
- Complete service outage
|
||||
- Data loss or security breach
|
||||
- Financial impact >$10,000/hour
|
||||
- Customer impact >50%
|
||||
|
||||
#### SEV-1 (High)
|
||||
- Significant service degradation
|
||||
- Feature unavailable
|
||||
- Financial impact $1,000-$10,000/hour
|
||||
- Customer impact 10-50%
|
||||
|
||||
#### SEV-2 (Medium)
|
||||
- Minor service degradation
|
||||
- Performance issues
|
||||
- Financial impact <$1,000/hour
|
||||
- Customer impact <10%
|
||||
|
||||
#### SEV-3 (Low)
|
||||
- Informational
|
||||
- No customer impact
|
||||
|
||||
### Incident Response Process
|
||||
|
||||
#### 1. Detection & Triage (0-5 minutes)
|
||||
```bash
|
||||
# Check alert severity
|
||||
# Verify impact
|
||||
# Create incident channel
|
||||
# Notify stakeholders
|
||||
```
|
||||
|
||||
#### 2. Assessment (5-15 minutes)
|
||||
- Determine scope
|
||||
- Identify root cause area
|
||||
- Estimate resolution time
|
||||
- Declare severity level
|
||||
|
||||
#### 3. Communication (15-30 minutes)
|
||||
- Update status page
|
||||
- Notify customers (if needed)
|
||||
- Internal stakeholder updates
|
||||
- Set up war room
|
||||
|
||||
#### 4. Resolution (Varies)
|
||||
- Implement fix
|
||||
- Verify resolution
|
||||
- Monitor for recurrence
|
||||
- Document actions
|
||||
|
||||
#### 5. Recovery (30-60 minutes)
|
||||
- Full service restoration
|
||||
- Performance validation
|
||||
- Customer communication
|
||||
- Incident closure
|
||||
|
||||
## Service-Specific Runbooks
|
||||
|
||||
### Coordinator API
|
||||
|
||||
#### High Error Rate
|
||||
**Symptoms**: 5xx errors >5%, response time >500ms
|
||||
**Runbook**:
|
||||
1. Check pod health: `kubectl get pods -l app=coordinator`
|
||||
2. Review logs: `kubectl logs -f deployment/coordinator`
|
||||
3. Check database connectivity
|
||||
4. Verify Redis connection
|
||||
5. Scale if needed: `kubectl scale deployment coordinator --replicas=5`
|
||||
|
||||
#### Service Unavailable
|
||||
**Symptoms**: 503 errors, health check failures
|
||||
**Runbook**:
|
||||
1. Check deployment status
|
||||
2. Review recent deployments
|
||||
3. Rollback if necessary
|
||||
4. Check resource limits
|
||||
5. Verify ingress configuration
|
||||
|
||||
### Blockchain Nodes
|
||||
|
||||
#### Consensus Stalled
|
||||
**Symptoms**: No new blocks, high finality latency
|
||||
**Runbook**:
|
||||
1. Check node sync status
|
||||
2. Verify network connectivity
|
||||
3. Review validator set
|
||||
4. Check governance proposals
|
||||
5. Restart if needed (with caution)
|
||||
|
||||
#### High Peer Drop Rate
|
||||
**Symptoms**: Connected peers <50%, network partition
|
||||
**Runbook**:
|
||||
1. Check network policies
|
||||
2. Verify DNS resolution
|
||||
3. Review firewall rules
|
||||
4. Check load balancer health
|
||||
5. Restart networking components
|
||||
|
||||
### Database (PostgreSQL)
|
||||
|
||||
#### Connection Exhaustion
|
||||
**Symptoms**: "Too many connections" errors
|
||||
**Runbook**:
|
||||
1. Check active connections
|
||||
2. Identify long-running queries
|
||||
3. Kill idle connections
|
||||
4. Increase pool size if needed
|
||||
5. Scale database
|
||||
|
||||
#### Replica Lag
|
||||
**Symptoms**: Read replica lag >10 seconds
|
||||
**Runbook**:
|
||||
1. Check replica status
|
||||
2. Review network latency
|
||||
3. Verify disk space
|
||||
4. Restart replication if needed
|
||||
5. Failover if necessary
|
||||
|
||||
### Redis
|
||||
|
||||
#### Memory Pressure
|
||||
**Symptoms**: OOM errors, high eviction rate
|
||||
**Runbook**:
|
||||
1. Check memory usage
|
||||
2. Review key expiration
|
||||
3. Clean up unused keys
|
||||
4. Scale Redis cluster
|
||||
5. Optimize data structures
|
||||
|
||||
#### Connection Issues
|
||||
**Symptoms**: Connection timeouts, errors
|
||||
**Runbook**:
|
||||
1. Check max connections
|
||||
2. Review connection pool
|
||||
3. Verify network policies
|
||||
4. Restart Redis if needed
|
||||
5. Scale horizontally
|
||||
|
||||
## Monitoring Dashboards
|
||||
|
||||
### Primary Dashboards
|
||||
|
||||
#### 1. System Overview
|
||||
- Service health status
|
||||
- Error rates (4xx/5xx)
|
||||
- Response times
|
||||
- Throughput metrics
|
||||
- Resource utilization
|
||||
|
||||
#### 2. Infrastructure
|
||||
- Kubernetes cluster health
|
||||
- Node resource usage
|
||||
- Pod status and restarts
|
||||
- Network traffic
|
||||
- Storage capacity
|
||||
|
||||
#### 3. Application Metrics
|
||||
- Job submission rates
|
||||
- Transaction processing
|
||||
- Marketplace activity
|
||||
- Wallet operations
|
||||
- Mining statistics
|
||||
|
||||
#### 4. Business KPIs
|
||||
- Active users
|
||||
- Transaction volume
|
||||
- Revenue metrics
|
||||
- Customer satisfaction
|
||||
- SLA compliance
|
||||
|
||||
### Alert Rules
|
||||
|
||||
#### Critical Alerts
|
||||
- Service down >1 minute
|
||||
- Error rate >10%
|
||||
- Response time >1 second
|
||||
- Disk space >90%
|
||||
- Memory usage >95%
|
||||
|
||||
#### Warning Alerts
|
||||
- Error rate >5%
|
||||
- Response time >500ms
|
||||
- CPU usage >80%
|
||||
- Queue depth >1000
|
||||
- Replica lag >5s
|
||||
|
||||
## SLOs & SLIs
|
||||
|
||||
### Service Level Objectives
|
||||
|
||||
| Service | Metric | Target | Measurement |
|
||||
|---------|--------|--------|-------------|
|
||||
| Coordinator API | Availability | 99.9% | 30-day rolling |
|
||||
| Coordinator API | Latency | <200ms | 95th percentile |
|
||||
| Blockchain | Block Time | <2s | 24-hour average |
|
||||
| Marketplace | Success Rate | 99.5% | Daily |
|
||||
| Explorer | Response Time | <500ms | 95th percentile |
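Availability and latency SLIs can be spot-checked directly against Prometheus. The query below is a sketch only: the metric and label names (`http_requests_total`, `service="coordinator-api"`) are assumptions and may differ in this deployment:

```bash
# 30-day availability of the Coordinator API as a ratio of non-5xx to total requests
PROM="http://prometheus:9090"
QUERY='sum(rate(http_requests_total{service="coordinator-api",code!~"5.."}[30d])) / sum(rate(http_requests_total{service="coordinator-api"}[30d]))'

curl -sG "${PROM}/api/v1/query" --data-urlencode "query=${QUERY}" | jq -r '.data.result[0].value[1]'
```

A result of 0.999 or higher keeps the Coordinator API within its 99.9% objective.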
|
||||
|
||||
### Service Level Indicators
|
||||
|
||||
#### Availability
|
||||
- HTTP status codes
|
||||
- Health check responses
|
||||
- Pod readiness status
|
||||
|
||||
#### Latency
|
||||
- Request duration histogram
|
||||
- Database query times
|
||||
- External API calls
|
||||
|
||||
#### Throughput
|
||||
- Requests per second
|
||||
- Transactions per block
|
||||
- Jobs completed per hour
|
||||
|
||||
#### Quality
|
||||
- Error rates
|
||||
- Success rates
|
||||
- Customer satisfaction
|
||||
|
||||
## Post-Incident Process
|
||||
|
||||
### Immediate Actions (0-1 hour)
|
||||
1. Verify full resolution
|
||||
2. Monitor for recurrence
|
||||
3. Update status page
|
||||
4. Notify stakeholders
|
||||
|
||||
### Post-Mortem (1-24 hours)
|
||||
1. Create incident document
|
||||
2. Gather timeline and logs
|
||||
3. Identify root cause
|
||||
4. Document lessons learned
|
||||
|
||||
### Follow-up (1-7 days)
|
||||
1. Schedule post-mortem meeting
|
||||
2. Assign action items
|
||||
3. Update runbooks
|
||||
4. Improve monitoring
|
||||
|
||||
### Review (Weekly)
|
||||
1. Review incident trends
|
||||
2. Update SLOs if needed
|
||||
3. Adjust alerting thresholds
|
||||
4. Improve processes
|
||||
|
||||
## Maintenance Windows
|
||||
|
||||
### Scheduled Maintenance
|
||||
- **Frequency**: Weekly maintenance window
|
||||
- **Time**: Sunday 02:00-04:00 UTC
|
||||
- **Duration**: Maximum 2 hours
|
||||
- **Notification**: 72 hours advance
|
||||
|
||||
### Emergency Maintenance
|
||||
- **Approval**: Engineering Manager required
|
||||
- **Notification**: 4 hours advance (if possible)
|
||||
- **Duration**: As needed
|
||||
- **Rollback**: Always required
|
||||
|
||||
## Tools & Systems
|
||||
|
||||
### Monitoring Stack
|
||||
- **Prometheus**: Metrics collection
|
||||
- **Grafana**: Visualization and dashboards
|
||||
- **Alertmanager**: Alert routing and management
|
||||
- **PagerDuty**: On-call scheduling and escalation
|
||||
|
||||
### Observability
|
||||
- **Jaeger**: Distributed tracing
|
||||
- **Loki**: Log aggregation
|
||||
- **Kiali**: Service mesh visualization
|
||||
- **Kube-state-metrics**: Kubernetes metrics
|
||||
|
||||
### Communication
|
||||
- **Slack**: Primary communication
|
||||
- **Zoom**: War room meetings
|
||||
- **Status Page**: Customer notifications
|
||||
- **Email**: Formal communications
|
||||
|
||||
## Training & Onboarding
|
||||
|
||||
### New On-Call Engineer
|
||||
1. Shadow primary for 1 week
|
||||
2. Review all runbooks
|
||||
3. Test alerting systems
|
||||
4. Handle low-severity incidents
|
||||
5. Solo on-call with mentor
|
||||
|
||||
### Ongoing Training
|
||||
- Monthly incident drills
|
||||
- Quarterly runbook updates
|
||||
- Annual training refreshers
|
||||
- Cross-team knowledge sharing
|
||||
|
||||
## Emergency Procedures
|
||||
|
||||
### Major Outage
|
||||
1. Declare incident (SEV-0)
|
||||
2. Activate war room
|
||||
3. Customer communication
|
||||
4. Executive updates
|
||||
5. Recovery coordination
|
||||
|
||||
### Security Incident
|
||||
1. Isolate affected systems
|
||||
2. Preserve evidence
|
||||
3. Notify security team
|
||||
4. Customer notification
|
||||
5. Regulatory compliance
|
||||
|
||||
### Data Loss
|
||||
1. Stop affected services
|
||||
2. Assess impact
|
||||
3. Initiate recovery
|
||||
4. Customer communication
|
||||
5. Prevent recurrence
|
||||
|
||||
## Appendix
|
||||
|
||||
### A. Contact List
|
||||
[Detailed contact information]
|
||||
|
||||
### B. Runbook Checklist
|
||||
[Quick reference checklists]
|
||||
|
||||
### C. Alert Configuration
|
||||
[Prometheus rules and thresholds]
|
||||
|
||||
### D. Dashboard Links
|
||||
[Grafana dashboard URLs]
|
||||
|
||||
---
|
||||
|
||||
*Document Version: 1.0*
|
||||
*Last Updated: 2024-12-22*
|
||||
*Next Review: 2025-01-22*
|
||||
*Owner: SRE Team*
|
||||
340
docs/operator/security.md
Normal file
@ -0,0 +1,340 @@
|
||||
# AITBC Security Documentation
|
||||
|
||||
This document outlines the security architecture, threat model, and implementation details for the AITBC platform.
|
||||
|
||||
## Overview
|
||||
|
||||
AITBC implements defense-in-depth security across multiple layers:
|
||||
- Network security with TLS termination
|
||||
- API authentication and authorization
|
||||
- Secrets management and encryption
|
||||
- Infrastructure security best practices
|
||||
- Monitoring and incident response
|
||||
|
||||
## Threat Model
|
||||
|
||||
### Threat Actors
|
||||
|
||||
| Actor | Motivation | Capabilities | Impact |
|
||||
|-------|-----------|--------------|--------|
|
||||
| External attacker | Financial gain, disruption | Network access, exploits | High |
|
||||
| Malicious insider | Data theft, sabotage | Internal access | Critical |
|
||||
| Competitor | IP theft, market manipulation | Sophisticated attacks | High |
|
||||
| Casual user | Accidental misuse | Limited knowledge | Low |
|
||||
|
||||
### Attack Vectors
|
||||
|
||||
1. **Network Attacks**
|
||||
- Man-in-the-middle (MITM) attacks
|
||||
- DDoS attacks
|
||||
- Network reconnaissance
|
||||
|
||||
2. **API Attacks**
|
||||
- Unauthorized access to marketplace
|
||||
- API key leakage
|
||||
- Rate limiting bypass
|
||||
- Injection attacks
|
||||
|
||||
3. **Infrastructure Attacks**
|
||||
- Container escape
|
||||
- Pod-to-pod attacks
|
||||
- Secrets exfiltration
|
||||
- Supply chain attacks
|
||||
|
||||
4. **Blockchain-Specific Attacks**
|
||||
- 51% attacks on consensus
|
||||
- Transaction replay attacks
|
||||
- Smart contract exploits
|
||||
- Miner collusion
|
||||
|
||||
### Security Controls
|
||||
|
||||
| Control | Implementation | Mitigates |
|
||||
|---------|----------------|-----------|
|
||||
| TLS 1.3 | cert-manager + ingress | MITM, eavesdropping |
|
||||
| API Keys | X-API-Key header | Unauthorized access |
|
||||
| Rate Limiting | slowapi middleware | DDoS, abuse |
|
||||
| Network Policies | Kubernetes NetworkPolicy | Pod-to-pod attacks |
|
||||
| Secrets Mgmt | Kubernetes Secrets + SealedSecrets | Secrets exfiltration |
|
||||
| RBAC | Kubernetes RBAC | Privilege escalation |
|
||||
| Monitoring | Prometheus + AlertManager | Incident detection |
|
||||
|
||||
## Security Architecture
|
||||
|
||||
### Network Security
|
||||
|
||||
#### TLS Termination
|
||||
```yaml
|
||||
# Ingress configuration with TLS
|
||||
apiVersion: networking.k8s.io/v1
|
||||
kind: Ingress
|
||||
metadata:
|
||||
annotations:
|
||||
cert-manager.io/cluster-issuer: letsencrypt-prod
|
||||
nginx.ingress.kubernetes.io/ssl-protocols: "TLSv1.3"
|
||||
spec:
|
||||
tls:
|
||||
- hosts:
|
||||
- api.aitbc.io
|
||||
secretName: api-tls
|
||||
```
|
||||
|
||||
#### Certificate Management
|
||||
- Uses cert-manager for automatic certificate provisioning
|
||||
- Supports Let's Encrypt for production
|
||||
- Internal CA for development environments
|
||||
- Automatic renewal 30 days before expiry (a quick expiry check is sketched below)
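A quick way to confirm renewal is working, assuming `kubectl` access to the cluster and public reachability of `api.aitbc.io`:

```bash
# cert-manager view of certificate health
kubectl get certificates -A

# Expiry date actually served on the public endpoint
echo | openssl s_client -connect api.aitbc.io:443 -servername api.aitbc.io 2>/dev/null \
  | openssl x509 -noout -enddate
```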
|
||||
|
||||
### API Security
|
||||
|
||||
#### Authentication
|
||||
- API key-based authentication for all services
|
||||
- Keys stored in Kubernetes Secrets
|
||||
- Per-service key rotation policies
|
||||
- Audit logging for all authenticated requests
|
||||
|
||||
#### Authorization
|
||||
- Role-based access control (RBAC)
|
||||
- Resource-level permissions
|
||||
- Rate limiting per API key
|
||||
- IP whitelisting for sensitive operations
|
||||
|
||||
#### API Key Format
|
||||
```
|
||||
Header: X-API-Key: aitbc_prod_ak_1a2b3c4d5e6f7g8h9i0j
|
||||
```
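Keys in this format can be generated and stored as a Kubernetes Secret. The secret name and key layout below are illustrative, not a fixed convention:

```bash
# Generate a key with the aitbc_<env>_ak_ prefix and a 20-character random suffix
ENV="prod"
API_KEY="aitbc_${ENV}_ak_$(openssl rand -hex 10)"

# Store it in the secret consumed by the Coordinator API
kubectl create secret generic coordinator-api-keys \
  --namespace default \
  --from-literal="api-key-${ENV}=${API_KEY}" \
  --dry-run=client -o yaml | kubectl apply -f -
```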
|
||||
|
||||
### Secrets Management
|
||||
|
||||
#### Kubernetes Secrets
|
||||
- Base64 encoded secrets (not encrypted by default)
|
||||
- Encrypted at rest with etcd encryption
|
||||
- Access controlled via RBAC
|
||||
|
||||
#### SealedSecrets (Recommended for Production)
|
||||
- Client-side encryption of secrets
|
||||
- GitOps friendly
|
||||
- Zero-knowledge encryption
|
||||
|
||||
#### Secret Rotation
|
||||
- Automated rotation every 90 days
|
||||
- Zero-downtime rotation for services
|
||||
- Audit trail of all rotations
|
||||
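Rotation can be scripted as a re-seal of the relevant SealedSecret; a sketch assuming the `coordinator-api-keys` secret from the implementation section and a GitOps-managed manifest repository:

```bash
# Generate a fresh key, re-seal it, and commit the sealed manifest
NEW_KEY="aitbc_prod_ak_$(openssl rand -hex 16)"
kubectl create secret generic coordinator-api-keys \
  --from-literal=api-key-prod="$NEW_KEY" \
  --dry-run=client -o yaml \
  | kubeseal --format yaml > sealed-secret.yaml
git add sealed-secret.yaml && git commit -m "rotate coordinator API key"
```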
|
||||
## Implementation Details
|
||||
|
||||
### 1. TLS Configuration
|
||||
|
||||
#### Coordinator API
|
||||
```yaml
|
||||
# Helm values for coordinator
|
||||
ingress:
|
||||
enabled: true
|
||||
annotations:
|
||||
cert-manager.io/cluster-issuer: letsencrypt-prod
|
||||
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
|
||||
nginx.ingress.kubernetes.io/ssl-protocols: "TLSv1.3"
|
||||
tls:
|
||||
- secretName: coordinator-tls
|
||||
hosts:
|
||||
- api.aitbc.io
|
||||
```
|
||||
|
||||
#### Blockchain Node RPC
```text
# WebSocket with TLS
wss://api.aitbc.io:8080/ws
```
|
||||
|
||||
### 2. API Authentication Middleware
|
||||
|
||||
#### Coordinator API Implementation
|
||||
```python
from fastapi import Request, Security, HTTPException
from fastapi.security import APIKeyHeader
from fastapi.responses import JSONResponse

api_key_header = APIKeyHeader(name="X-API-Key", auto_error=True)


async def verify_api_key(api_key: str = Security(api_key_header)):
    """Dependency for per-route protection."""
    if not verify_key(api_key):
        raise HTTPException(status_code=403, detail="Invalid API key")
    return api_key


@app.middleware("http")
async def auth_middleware(request: Request, call_next):
    """Reject unauthenticated /v1/ requests before they reach the routers."""
    if request.url.path.startswith("/v1/"):
        api_key = request.headers.get("X-API-Key")
        if not verify_key(api_key):
            # HTTPException raised in raw middleware is not converted by the
            # exception handlers, so return the 403 response directly
            return JSONResponse(status_code=403, content={"detail": "Invalid or missing API key"})
    response = await call_next(request)
    return response
```
|
||||
|
||||
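The Security Controls table lists slowapi for rate limiting; a minimal per-API-key limiter sketch to complement the middleware above (the example route and limit are placeholders):

```python
from fastapi import FastAPI, Request
from slowapi import Limiter, _rate_limit_exceeded_handler
from slowapi.errors import RateLimitExceeded
from slowapi.util import get_remote_address


def api_key_or_ip(request: Request) -> str:
    # Rate-limit per API key when present, falling back to the client IP
    return request.headers.get("X-API-Key") or get_remote_address(request)


limiter = Limiter(key_func=api_key_or_ip)
app = FastAPI()
app.state.limiter = limiter
app.add_exception_handler(RateLimitExceeded, _rate_limit_exceeded_handler)


@app.get("/v1/marketplace/offers")  # placeholder route
@limiter.limit("60/minute")         # placeholder limit
async def list_offers(request: Request):
    return {"offers": []}
```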
### 3. Secrets Management Setup
|
||||
|
||||
#### SealedSecrets Installation
|
||||
```bash
|
||||
# Install sealed-secrets controller
|
||||
helm repo add sealed-secrets https://bitnami-labs.github.io/sealed-secrets
|
||||
helm install sealed-secrets sealed-secrets/sealed-secrets -n kube-system
|
||||
|
||||
# Create a sealed secret
|
||||
kubeseal --format yaml < secret.yaml > sealed-secret.yaml
|
||||
```
|
||||
|
||||
#### Example Secret Structure
|
||||
```yaml
|
||||
apiVersion: bitnami.com/v1alpha1
|
||||
kind: SealedSecret
|
||||
metadata:
|
||||
name: coordinator-api-keys
|
||||
spec:
|
||||
encryptedData:
|
||||
api-key-prod: AgBy3i4OJSWK+PiTySYZZA9rO43cGDEQAx...
|
||||
api-key-dev: AgBy3i4OJSWK+PiTySYZZA9rO43cGDEQAx...
|
||||
```
|
||||
|
||||
### 4. Network Policies
|
||||
|
||||
#### Default Deny Policy
|
||||
```yaml
|
||||
apiVersion: networking.k8s.io/v1
|
||||
kind: NetworkPolicy
|
||||
metadata:
|
||||
name: default-deny-all
|
||||
spec:
|
||||
podSelector: {}
|
||||
policyTypes:
|
||||
- Ingress
|
||||
- Egress
|
||||
```
|
||||
|
||||
#### Service-Specific Policies
|
||||
```yaml
|
||||
apiVersion: networking.k8s.io/v1
|
||||
kind: NetworkPolicy
|
||||
metadata:
|
||||
name: coordinator-api-netpol
|
||||
spec:
|
||||
podSelector:
|
||||
matchLabels:
|
||||
app: coordinator-api
|
||||
policyTypes:
|
||||
- Ingress
|
||||
- Egress
|
||||
ingress:
|
||||
- from:
|
||||
- podSelector:
|
||||
matchLabels:
|
||||
app: ingress-nginx
|
||||
ports:
|
||||
- protocol: TCP
|
||||
port: 8011
|
||||
```
|
||||
|
||||
## Security Best Practices
|
||||
|
||||
### Development Environment
|
||||
- Use 127.0.0.2 for local development (not 0.0.0.0)
|
||||
- Separate API keys for dev/staging/prod
|
||||
- Enable debug logging only in development
|
||||
- Use self-signed certificates for local TLS
|
||||
|
||||
### Production Environment
|
||||
- Enable all security headers
|
||||
- Implement comprehensive logging
|
||||
- Use external secret management
|
||||
- Regular security audits
|
||||
- Penetration testing quarterly
|
||||
|
||||
### Monitoring and Alerting
|
||||
|
||||
#### Security Metrics
|
||||
- Failed authentication attempts (exported as shown below)
|
||||
- Unusual API usage patterns
|
||||
- Certificate expiry warnings
|
||||
- Secret access audits
|
||||
|
||||
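The `auth_failures_total` series used by the first alert rule below can be exported directly from the authentication middleware; a minimal prometheus_client sketch (the label names are assumptions):

```python
from prometheus_client import Counter

# Incremented in the middleware's rejection branch
AUTH_FAILURES = Counter(
    "auth_failures_total", "Rejected API requests", ["service", "reason"]
)

# Example usage inside auth_middleware:
# AUTH_FAILURES.labels(service="coordinator-api", reason="invalid_key").inc()
```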
#### Alert Rules
|
||||
```yaml
|
||||
- alert: HighAuthFailureRate
|
||||
expr: rate(auth_failures_total[5m]) > 10
|
||||
for: 2m
|
||||
labels:
|
||||
severity: warning
|
||||
annotations:
|
||||
summary: "High authentication failure rate detected"
|
||||
|
||||
- alert: CertificateExpiringSoon
|
||||
expr: cert_certificate_expiry_time < time() + 86400 * 7
|
||||
for: 1h
|
||||
labels:
|
||||
severity: critical
|
||||
annotations:
|
||||
summary: "Certificate expires in less than 7 days"
|
||||
```
|
||||
|
||||
## Incident Response
|
||||
|
||||
### Security Incident Categories
|
||||
1. **Critical**: Data breach, system compromise
|
||||
2. **High**: Service disruption, privilege escalation
|
||||
3. **Medium**: Suspicious activity, policy violation
|
||||
4. **Low**: Misconfiguration, minor issue
|
||||
|
||||
### Response Procedures
|
||||
1. **Detection**: Automated alerts, manual monitoring
|
||||
2. **Assessment**: Impact analysis, containment
|
||||
3. **Remediation**: Patch, rotate credentials, restore
|
||||
4. **Post-mortem**: Document, improve controls
|
||||
|
||||
### Emergency Contacts
|
||||
- Security Team: security@aitbc.io
|
||||
- On-call Engineer: +1-555-SECURITY
|
||||
- Incident Commander: incident@aitbc.io
|
||||
|
||||
## Compliance
|
||||
|
||||
### Data Protection
|
||||
- GDPR compliance for EU users
|
||||
- CCPA compliance for California users
|
||||
- Data retention policies
|
||||
- Right to deletion implementation
|
||||
|
||||
### Auditing
|
||||
- Quarterly security audits
|
||||
- Annual penetration testing
|
||||
- Continuous vulnerability scanning
|
||||
- Third-party security assessments
|
||||
|
||||
## Security Checklist
|
||||
|
||||
### Pre-deployment
|
||||
- [ ] All API endpoints require authentication
|
||||
- [ ] TLS certificates valid and properly configured
|
||||
- [ ] Secrets encrypted and access-controlled
|
||||
- [ ] Network policies implemented
|
||||
- [ ] RBAC configured correctly
|
||||
- [ ] Monitoring and alerting active
|
||||
- [ ] Backup encryption enabled
|
||||
- [ ] Security headers configured
|
||||
|
||||
### Post-deployment
|
||||
- [ ] Security testing completed
|
||||
- [ ] Documentation updated
|
||||
- [ ] Team trained on procedures
|
||||
- [ ] Incident response tested
|
||||
- [ ] Compliance verified
|
||||
|
||||
## References
|
||||
|
||||
- [OWASP API Security Top 10](https://owasp.org/www-project-api-security/)
|
||||
- [Kubernetes Security Best Practices](https://kubernetes.io/docs/concepts/security/)
|
||||
- [NIST Cybersecurity Framework](https://www.nist.gov/cyberframework)
|
||||
- [CERT Coordination Center](https://www.cert.org/)
|
||||
|
||||
## Security Updates
|
||||
|
||||
This document is updated regularly. Last updated: 2024-12-22
|
||||
|
||||
For questions or concerns, contact the security team at security@aitbc.io
|
||||
@ -1,8 +1,9 @@
|
||||
# Pool Hub – Task Breakdown
|
||||
|
||||
## Status (2025-09-27)
|
||||
## Status (2025-12-22)
|
||||
|
||||
- **Stage 1**: Service still in design phase. Coordinator API and miner telemetry improvements will feed into pool hub scoring once implementation starts.
|
||||
- **Stage 1**: FastAPI service implemented with miner registry, scoring engine, and Redis/PostgreSQL backing stores. Service configuration API and UI added for GPU providers to select which services to offer.
|
||||
- **Service Configuration**: Implemented dynamic service configuration allowing miners to enable/disable specific GPU services, set pricing, and define capabilities.
|
||||
|
||||
## Stage 1 (MVP)
|
||||
|
||||
@ -25,6 +26,16 @@
|
||||
- `POST /v1/match` returning top K candidates for coordinator requests with explain string.
|
||||
- `POST /v1/feedback` to adjust trust and metrics.
|
||||
- `GET /v1/health` and `GET /v1/metrics` for observability.
|
||||
- Service Configuration endpoints:
|
||||
- `GET /v1/services/` - List all service configurations for miner
|
||||
- `GET /v1/services/{type}` - Get specific service configuration
|
||||
- `POST /v1/services/{type}` - Create/update service configuration
|
||||
- `PATCH /v1/services/{type}` - Partial update
|
||||
- `DELETE /v1/services/{type}` - Delete configuration
|
||||
- `GET /v1/services/templates/{type}` - Get default templates
|
||||
- `POST /v1/services/validate/{type}` - Validate against hardware
|
||||
- UI endpoint:
|
||||
- `GET /services` - Service configuration web interface
|
||||
- Optional admin listing endpoint guarded by shared secret.
|
||||
|
||||
- **Rate Limiting & Security**
|
||||
|
||||
403
docs/reference/architecture/cross-chain-settlement-design.md
Normal file
@ -0,0 +1,403 @@
|
||||
# Cross-Chain Settlement Hooks Design
|
||||
|
||||
## Overview
|
||||
|
||||
This document outlines the architecture for cross-chain settlement hooks in AITBC, enabling job receipts and proofs to be settled across multiple blockchains using various bridge protocols.
|
||||
|
||||
## Architecture
|
||||
|
||||
### Core Components
|
||||
|
||||
```
|
||||
┌─────────────────┐ ┌──────────────────┐ ┌─────────────────┐
|
||||
│ AITBC Chain │ │ Settlement Hooks │ │ Target Chains │
|
||||
│ │ │ │ │ │
|
||||
│ - Job Receipts │───▶│ - Bridge Manager │───▶│ - Ethereum │
|
||||
│ - Proofs │ │ - Adapters │ │ - Polygon │
|
||||
│ - Payments │ │ - Router │ │ - BSC │
|
||||
│ │ │ - Validator │ │ - Arbitrum │
|
||||
└─────────────────┘ └──────────────────┘ └─────────────────┘
|
||||
```
|
||||
|
||||
### Settlement Hook Interface
|
||||
|
||||
```python
|
||||
from abc import ABC, abstractmethod
|
||||
from typing import Dict, Any, List
|
||||
from dataclasses import dataclass
|
||||
|
||||
@dataclass
|
||||
class SettlementMessage:
|
||||
"""Message to be settled across chains"""
|
||||
source_chain_id: int
|
||||
target_chain_id: int
|
||||
job_id: str
|
||||
receipt_hash: str
|
||||
proof_data: Dict[str, Any]
|
||||
payment_amount: int
|
||||
payment_token: str
|
||||
nonce: int
|
||||
signature: str
|
||||
|
||||
class BridgeAdapter(ABC):
|
||||
"""Abstract interface for bridge adapters"""
|
||||
|
||||
@abstractmethod
|
||||
async def send_message(self, message: SettlementMessage) -> str:
|
||||
"""Send message to target chain"""
|
||||
pass
|
||||
|
||||
@abstractmethod
|
||||
async def verify_delivery(self, message_id: str) -> bool:
|
||||
"""Verify message was delivered"""
|
||||
pass
|
||||
|
||||
@abstractmethod
|
||||
async def estimate_cost(self, message: SettlementMessage) -> Dict[str, int]:
|
||||
"""Estimate bridge fees"""
|
||||
pass
|
||||
|
||||
@abstractmethod
|
||||
def get_supported_chains(self) -> List[int]:
|
||||
"""Get list of supported target chains"""
|
||||
pass
|
||||
|
||||
@abstractmethod
|
||||
def get_max_message_size(self) -> int:
|
||||
"""Get maximum message size in bytes"""
|
||||
pass
|
||||
```
|
||||
|
||||
### Bridge Manager
|
||||
|
||||
```python
|
||||
class BridgeManager:
|
||||
"""Manages multiple bridge adapters"""
|
||||
|
||||
def __init__(self):
|
||||
self.adapters: Dict[str, BridgeAdapter] = {}
|
||||
self.default_adapter: str = None
|
||||
|
||||
def register_adapter(self, name: str, adapter: BridgeAdapter):
|
||||
"""Register a bridge adapter"""
|
||||
self.adapters[name] = adapter
|
||||
|
||||
async def settle_cross_chain(
|
||||
self,
|
||||
message: SettlementMessage,
|
||||
bridge_name: str = None
|
||||
) -> str:
|
||||
"""Settle message across chains"""
|
||||
adapter = self._get_adapter(bridge_name)
|
||||
|
||||
# Validate message
|
||||
self._validate_message(message, adapter)
|
||||
|
||||
# Send message
|
||||
message_id = await adapter.send_message(message)
|
||||
|
||||
# Store settlement record
|
||||
await self._store_settlement(message_id, message)
|
||||
|
||||
return message_id
|
||||
|
||||
def _get_adapter(self, bridge_name: str = None) -> BridgeAdapter:
|
||||
"""Get bridge adapter"""
|
||||
if bridge_name:
|
||||
return self.adapters[bridge_name]
|
||||
return self.adapters[self.default_adapter]
|
||||
```
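`_validate_message` and `_store_settlement` are referenced above but not shown. A sketch of the validation step against the adapter interface defined earlier; estimating payload size via JSON serialization is an assumption:

```python
import json


def _validate_message(self, message: SettlementMessage, adapter: BridgeAdapter) -> None:
    """Reject messages the chosen adapter cannot deliver (BridgeManager method sketch)."""
    if message.target_chain_id not in adapter.get_supported_chains():
        raise ValueError(f"Chain {message.target_chain_id} not supported by this bridge")
    payload_size = len(json.dumps(message.proof_data).encode())
    if payload_size > adapter.get_max_message_size():
        raise ValueError("Settlement payload exceeds the bridge message size limit")
    if message.payment_amount <= 0:
        raise ValueError("Payment amount must be positive")
```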
|
||||
|
||||
## Bridge Implementations
|
||||
|
||||
### 1. LayerZero Adapter
|
||||
|
||||
```python
|
||||
class LayerZeroAdapter(BridgeAdapter):
|
||||
"""LayerZero bridge adapter"""
|
||||
|
||||
def __init__(self, endpoint_address: str, chain_id: int):
|
||||
self.endpoint = endpoint_address
|
||||
self.chain_id = chain_id
|
||||
self.contract = self._load_contract()
|
||||
|
||||
async def send_message(self, message: SettlementMessage) -> str:
|
||||
"""Send via LayerZero"""
|
||||
# Encode settlement data
|
||||
payload = self._encode_payload(message)
|
||||
|
||||
# Estimate fees
|
||||
fees = await self._estimate_fees(message)
|
||||
|
||||
# Send transaction
|
||||
tx = await self.contract.send(
|
||||
message.target_chain_id,
|
||||
self._get_target_address(message.target_chain_id),
|
||||
payload,
|
||||
message.payment_amount,
|
||||
message.payment_token,
|
||||
fees
|
||||
)
|
||||
|
||||
return tx.hash
|
||||
|
||||
def _encode_payload(self, message: SettlementMessage) -> bytes:
|
||||
"""Encode message for LayerZero"""
|
||||
return abi.encode(
|
||||
['uint256', 'bytes32', 'bytes', 'uint256', 'address'],
|
||||
[
|
||||
message.job_id,
|
||||
message.receipt_hash,
|
||||
json.dumps(message.proof_data),
|
||||
message.payment_amount,
|
||||
message.payment_token
|
||||
]
|
||||
)
|
||||
```
|
||||
|
||||
### 2. Chainlink CCIP Adapter
|
||||
|
||||
```python
|
||||
class ChainlinkCCIPAdapter(BridgeAdapter):
|
||||
"""Chainlink CCIP bridge adapter"""
|
||||
|
||||
def __init__(self, router_address: str, chain_id: int):
|
||||
self.router = router_address
|
||||
self.chain_id = chain_id
|
||||
self.contract = self._load_contract()
|
||||
|
||||
async def send_message(self, message: SettlementMessage) -> str:
|
||||
"""Send via Chainlink CCIP"""
|
||||
# Create CCIP message
|
||||
ccip_message = {
|
||||
'receiver': self._get_target_address(message.target_chain_id),
|
||||
'data': self._encode_payload(message),
|
||||
'tokenAmounts': [{
|
||||
'token': message.payment_token,
|
||||
'amount': message.payment_amount
|
||||
}]
|
||||
}
|
||||
|
||||
# Estimate fees
|
||||
fees = await self.contract.getFee(ccip_message)
|
||||
|
||||
# Send transaction
|
||||
tx = await self.contract.ccipSend(ccip_message, {'value': fees})
|
||||
|
||||
return tx.hash
|
||||
```
|
||||
|
||||
### 3. Wormhole Adapter
|
||||
|
||||
```python
|
||||
class WormholeAdapter(BridgeAdapter):
|
||||
"""Wormhole bridge adapter"""
|
||||
|
||||
def __init__(self, bridge_address: str, chain_id: int):
|
||||
self.bridge = bridge_address
|
||||
self.chain_id = chain_id
|
||||
self.contract = self._load_contract()
|
||||
|
||||
async def send_message(self, message: SettlementMessage) -> str:
|
||||
"""Send via Wormhole"""
|
||||
# Encode payload
|
||||
payload = self._encode_payload(message)
|
||||
|
||||
# Send transaction
|
||||
tx = await self.contract.publishMessage(
|
||||
message.nonce,
|
||||
payload,
|
||||
message.payment_amount
|
||||
)
|
||||
|
||||
return tx.hash
|
||||
```
|
||||
|
||||
## Integration with Coordinator
|
||||
|
||||
### Settlement Hook in Coordinator
|
||||
|
||||
```python
|
||||
class SettlementHook:
|
||||
"""Settlement hook for coordinator"""
|
||||
|
||||
def __init__(self, bridge_manager: BridgeManager):
|
||||
self.bridge_manager = bridge_manager
|
||||
|
||||
async def on_job_completed(self, job: Job) -> None:
|
||||
"""Called when job completes"""
|
||||
# Check if cross-chain settlement needed
|
||||
if job.requires_cross_chain_settlement:
|
||||
await self._settle_cross_chain(job)
|
||||
|
||||
async def _settle_cross_chain(self, job: Job) -> None:
|
||||
"""Settle job across chains"""
|
||||
# Create settlement message
|
||||
message = SettlementMessage(
|
||||
source_chain_id=await self._get_chain_id(),
|
||||
target_chain_id=job.target_chain,
|
||||
job_id=job.id,
|
||||
receipt_hash=job.receipt.hash,
|
||||
proof_data=job.receipt.proof,
|
||||
payment_amount=job.payment_amount,
|
||||
payment_token=job.payment_token,
|
||||
nonce=await self._get_nonce(),
|
||||
signature=await self._sign_message(job)
|
||||
)
|
||||
|
||||
# Send via appropriate bridge
|
||||
await self.bridge_manager.settle_cross_chain(
|
||||
message,
|
||||
bridge_name=job.preferred_bridge
|
||||
)
|
||||
```
|
||||
|
||||
### Coordinator API Endpoints
|
||||
|
||||
```python
|
||||
@app.post("/v1/settlement/cross-chain")
|
||||
async def initiate_cross_chain_settlement(
|
||||
request: CrossChainSettlementRequest
|
||||
):
|
||||
"""Initiate cross-chain settlement"""
|
||||
job = await get_job(request.job_id)
|
||||
|
||||
if not job.completed:
|
||||
raise HTTPException(400, "Job not completed")
|
||||
|
||||
# Create settlement message
|
||||
message = SettlementMessage(
|
||||
source_chain_id=request.source_chain,
|
||||
target_chain_id=request.target_chain,
|
||||
job_id=job.id,
|
||||
receipt_hash=job.receipt.hash,
|
||||
proof_data=job.receipt.proof,
|
||||
payment_amount=request.amount,
|
||||
payment_token=request.token,
|
||||
nonce=await generate_nonce(),
|
||||
signature=await sign_settlement(job, request)
|
||||
)
|
||||
|
||||
# Send settlement
|
||||
message_id = await bridge_manager.settle_cross_chain(message)
|
||||
|
||||
return {"message_id": message_id, "status": "pending"}
|
||||
|
||||
@app.get("/v1/settlement/{message_id}/status")
|
||||
async def get_settlement_status(message_id: str):
|
||||
"""Get settlement status"""
|
||||
status = await bridge_manager.get_settlement_status(message_id)
|
||||
return status
|
||||
```
|
||||
|
||||
## Configuration
|
||||
|
||||
### Bridge Configuration
|
||||
|
||||
```yaml
|
||||
bridges:
|
||||
layerzero:
|
||||
enabled: true
|
||||
endpoint_address: "0x..."
|
||||
supported_chains: [1, 137, 56, 42161]
|
||||
default_fee: "0.001"
|
||||
|
||||
chainlink_ccip:
|
||||
enabled: true
|
||||
router_address: "0x..."
|
||||
supported_chains: [1, 137, 56, 42161]
|
||||
default_fee: "0.002"
|
||||
|
||||
wormhole:
|
||||
enabled: false
|
||||
bridge_address: "0x..."
|
||||
supported_chains: [1, 137, 56]
|
||||
default_fee: "0.0015"
|
||||
|
||||
settlement:
|
||||
default_bridge: "layerzero"
|
||||
max_retries: 3
|
||||
retry_delay: 30
|
||||
timeout: 3600
|
||||
```
|
||||
|
||||
## Security Considerations
|
||||
|
||||
### Message Validation
|
||||
- Verify signatures on all settlement messages
|
||||
- Validate chain IDs and addresses
|
||||
- Check message size limits
|
||||
- Prevent replay attacks with nonces (see the sketch below)
|
||||
|
||||
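A sketch of the nonce check referenced above; a production version would back the consumed-nonce set with persistent storage:

```python
class ReplayGuard:
    """Rejects settlement messages whose nonce was already consumed (sketch)."""

    def __init__(self) -> None:
        self._seen: set[tuple[int, int]] = set()  # (source_chain_id, nonce)

    def check_and_consume(self, message: SettlementMessage) -> None:
        key = (message.source_chain_id, message.nonce)
        if key in self._seen:
            raise ValueError(f"Replay detected for nonce {message.nonce}")
        self._seen.add(key)
```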
### Bridge Security
|
||||
- Use reputable audited bridge contracts
|
||||
- Implement bridge-specific security checks
|
||||
- Monitor for bridge vulnerabilities
|
||||
- Have fallback mechanisms
|
||||
|
||||
### Economic Security
|
||||
- Validate payment amounts
|
||||
- Check token allowances
|
||||
- Implement fee limits
|
||||
- Monitor for economic attacks
|
||||
|
||||
## Monitoring
|
||||
|
||||
### Metrics to Track
|
||||
- Settlement success rate per bridge
|
||||
- Average settlement time
|
||||
- Cost per settlement
|
||||
- Failed settlement reasons
|
||||
- Bridge health status (see the instrumentation sketch below)
|
||||
|
||||
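These metrics could be exported with prometheus_client; the metric and label names below are assumptions, to be aligned with the existing marketplace metrics:

```python
from prometheus_client import Counter, Gauge, Histogram

SETTLEMENT_ATTEMPTS = Counter(
    "settlement_attempts_total", "Cross-chain settlement attempts", ["bridge", "status"]
)
SETTLEMENT_DURATION = Histogram(
    "settlement_duration_seconds", "Time from submission to verified delivery", ["bridge"]
)
SETTLEMENT_COST = Histogram(
    "settlement_cost_native", "Bridge fee per settlement in native units", ["bridge"]
)
BRIDGE_UP = Gauge("bridge_up", "Bridge health status (1 = healthy)", ["bridge"])
```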
### Alerts
|
||||
- Settlement failures
|
||||
- High settlement costs
|
||||
- Bridge downtime
|
||||
- Unusual settlement patterns
|
||||
|
||||
## Testing
|
||||
|
||||
### Test Scenarios
|
||||
1. **Happy Path**: Successful settlement across chains
|
||||
2. **Bridge Failure**: Handle bridge unavailability
|
||||
3. **Message Too Large**: Handle size limits
|
||||
4. **Insufficient Funds**: Handle payment failures
|
||||
5. **Replay Attack**: Prevent duplicate settlements
|
||||
|
||||
### Test Networks
|
||||
- Ethereum Sepolia
|
||||
- Polygon Mumbai
|
||||
- BSC Testnet
|
||||
- Arbitrum Goerli
|
||||
|
||||
## Migration Path
|
||||
|
||||
### Phase 1: Single Bridge
|
||||
- Implement LayerZero adapter
|
||||
- Basic settlement functionality
|
||||
- Test on testnets
|
||||
|
||||
### Phase 2: Multiple Bridges
|
||||
- Add Chainlink CCIP
|
||||
- Implement bridge selection logic
|
||||
- Add cost optimization
|
||||
|
||||
### Phase 3: Advanced Features
|
||||
- Add Wormhole support
|
||||
- Implement atomic settlements
|
||||
- Add settlement routing
|
||||
|
||||
## Future Enhancements
|
||||
|
||||
1. **Atomic Settlements**: Ensure all-or-nothing settlements
|
||||
2. **Settlement Routing**: Automatically select optimal bridge
|
||||
3. **Batch Settlements**: Settle multiple jobs together
|
||||
4. **Cross-Chain Governance**: Governance across chains
|
||||
5. **Privacy Features**: Confidential settlements
|
||||
|
||||
---
|
||||
|
||||
*Document Version: 1.0*
|
||||
*Last Updated: 2025-01-10*
|
||||
*Owner: Core Protocol Team*
|
||||
618
docs/reference/architecture/python-sdk-transport-design.md
Normal file
@ -0,0 +1,618 @@
|
||||
# Python SDK Transport Abstraction Design
|
||||
|
||||
## Overview
|
||||
|
||||
This document outlines the design for a pluggable transport abstraction layer in the AITBC Python SDK, enabling support for multiple networks and cross-chain operations.
|
||||
|
||||
## Architecture
|
||||
|
||||
### Current SDK Structure
|
||||
```
|
||||
AITBCClient
|
||||
├── Jobs API
|
||||
├── Marketplace API
|
||||
├── Wallet API
|
||||
├── Receipts API
|
||||
└── Direct HTTP calls to coordinator
|
||||
```
|
||||
|
||||
### Proposed Transport-Based Structure
|
||||
```
|
||||
AITBCClient
|
||||
├── Transport Layer (Pluggable)
|
||||
│ ├── HTTPTransport
|
||||
│ ├── WebSocketTransport
|
||||
│ └── CrossChainTransport
|
||||
├── Jobs API
|
||||
├── Marketplace API
|
||||
├── Wallet API
|
||||
├── Receipts API
|
||||
└── Settlement API (New)
|
||||
```
|
||||
|
||||
## Transport Interface
|
||||
|
||||
### Base Transport Class
|
||||
|
||||
```python
|
||||
from abc import ABC, abstractmethod
|
||||
from typing import Any, AsyncIterator, Dict, Optional, Union
|
||||
import asyncio
|
||||
|
||||
class Transport(ABC):
|
||||
"""Abstract base class for all transports"""
|
||||
|
||||
def __init__(self, config: Dict[str, Any]):
|
||||
self.config = config
|
||||
self._connected = False
|
||||
|
||||
@abstractmethod
|
||||
async def connect(self) -> None:
|
||||
"""Establish connection"""
|
||||
pass
|
||||
|
||||
@abstractmethod
|
||||
async def disconnect(self) -> None:
|
||||
"""Close connection"""
|
||||
pass
|
||||
|
||||
@abstractmethod
|
||||
async def request(
|
||||
self,
|
||||
method: str,
|
||||
path: str,
|
||||
data: Optional[Dict[str, Any]] = None,
|
||||
params: Optional[Dict[str, Any]] = None,
|
||||
headers: Optional[Dict[str, str]] = None
|
||||
) -> Dict[str, Any]:
|
||||
"""Make a request"""
|
||||
pass
|
||||
|
||||
@abstractmethod
|
||||
async def stream(
|
||||
self,
|
||||
method: str,
|
||||
path: str,
|
||||
data: Optional[Dict[str, Any]] = None
|
||||
) -> AsyncIterator[Dict[str, Any]]:
|
||||
"""Stream responses"""
|
||||
pass
|
||||
|
||||
@property
|
||||
def is_connected(self) -> bool:
|
||||
"""Check if transport is connected"""
|
||||
return self._connected
|
||||
|
||||
@property
|
||||
def chain_id(self) -> Optional[int]:
|
||||
"""Get the chain ID this transport is connected to"""
|
||||
return self.config.get('chain_id')
|
||||
```
|
||||
|
||||
### HTTP Transport Implementation
|
||||
|
||||
```python
|
||||
import aiohttp
|
||||
from typing import AsyncIterator
|
||||
|
||||
class HTTPTransport(Transport):
|
||||
"""HTTP transport for REST API calls"""
|
||||
|
||||
def __init__(self, config: Dict[str, Any]):
|
||||
super().__init__(config)
|
||||
self.base_url = config['base_url']
|
||||
self.session: Optional[aiohttp.ClientSession] = None
|
||||
self.timeout = config.get('timeout', 30)
|
||||
|
||||
async def connect(self) -> None:
|
||||
"""Create HTTP session"""
|
||||
connector = aiohttp.TCPConnector(
|
||||
limit=100,
|
||||
limit_per_host=30,
|
||||
ttl_dns_cache=300,
|
||||
use_dns_cache=True,
|
||||
)
|
||||
|
||||
timeout = aiohttp.ClientTimeout(total=self.timeout)
|
||||
self.session = aiohttp.ClientSession(
|
||||
connector=connector,
|
||||
timeout=timeout,
|
||||
headers=self.config.get('default_headers', {})
|
||||
)
|
||||
self._connected = True
|
||||
|
||||
async def disconnect(self) -> None:
|
||||
"""Close HTTP session"""
|
||||
if self.session:
|
||||
await self.session.close()
|
||||
self.session = None
|
||||
self._connected = False
|
||||
|
||||
async def request(
|
||||
self,
|
||||
method: str,
|
||||
path: str,
|
||||
data: Optional[Dict[str, Any]] = None,
|
||||
params: Optional[Dict[str, Any]] = None,
|
||||
headers: Optional[Dict[str, str]] = None
|
||||
) -> Dict[str, Any]:
|
||||
"""Make HTTP request"""
|
||||
if not self.session:
|
||||
await self.connect()
|
||||
|
||||
url = f"{self.base_url}{path}"
|
||||
|
||||
async with self.session.request(
|
||||
method=method,
|
||||
url=url,
|
||||
json=data,
|
||||
params=params,
|
||||
headers=headers
|
||||
) as response:
|
||||
if response.status >= 400:
|
||||
error_data = await response.json()
|
||||
raise APIError(error_data.get('error', 'Unknown error'))
|
||||
|
||||
return await response.json()
|
||||
|
||||
async def stream(
|
||||
self,
|
||||
method: str,
|
||||
path: str,
|
||||
data: Optional[Dict[str, Any]] = None
|
||||
) -> AsyncIterator[Dict[str, Any]]:
|
||||
"""Stream HTTP responses (not supported for basic HTTP)"""
|
||||
raise NotImplementedError("HTTP transport does not support streaming")
|
||||
```
|
||||
|
||||
### WebSocket Transport Implementation
|
||||
|
||||
```python
|
||||
import websockets
|
||||
import json
|
||||
from typing import AsyncIterator
|
||||
|
||||
class WebSocketTransport(Transport):
|
||||
"""WebSocket transport for real-time updates"""
|
||||
|
||||
def __init__(self, config: Dict[str, Any]):
|
||||
super().__init__(config)
|
||||
self.ws_url = config['ws_url']
|
||||
self.websocket: Optional[websockets.WebSocketClientProtocol] = None  # client-side connection
|
||||
self._subscriptions: Dict[str, Any] = {}
|
||||
|
||||
async def connect(self) -> None:
|
||||
"""Connect to WebSocket"""
|
||||
self.websocket = await websockets.connect(
|
||||
self.ws_url,
|
||||
extra_headers=self.config.get('headers', {})
|
||||
)
|
||||
self._connected = True
|
||||
|
||||
# Start message handler
|
||||
asyncio.create_task(self._handle_messages())
|
||||
|
||||
async def disconnect(self) -> None:
|
||||
"""Disconnect WebSocket"""
|
||||
if self.websocket:
|
||||
await self.websocket.close()
|
||||
self.websocket = None
|
||||
self._connected = False
|
||||
|
||||
async def request(
|
||||
self,
|
||||
method: str,
|
||||
path: str,
|
||||
data: Optional[Dict[str, Any]] = None,
|
||||
params: Optional[Dict[str, Any]] = None,
|
||||
headers: Optional[Dict[str, str]] = None
|
||||
) -> Dict[str, Any]:
|
||||
"""Send request via WebSocket"""
|
||||
if not self.websocket:
|
||||
await self.connect()
|
||||
|
||||
message = {
|
||||
'id': self._generate_id(),
|
||||
'method': method,
|
||||
'path': path,
|
||||
'data': data,
|
||||
'params': params
|
||||
}
|
||||
|
||||
await self.websocket.send(json.dumps(message))
|
||||
response = await self.websocket.recv()
|
||||
return json.loads(response)
|
||||
|
||||
async def stream(
|
||||
self,
|
||||
method: str,
|
||||
path: str,
|
||||
data: Optional[Dict[str, Any]] = None
|
||||
) -> AsyncIterator[Dict[str, Any]]:
|
||||
"""Stream responses from WebSocket"""
|
||||
if not self.websocket:
|
||||
await self.connect()
|
||||
|
||||
# Subscribe to stream
|
||||
subscription_id = self._generate_id()
|
||||
message = {
|
||||
'id': subscription_id,
|
||||
'method': 'subscribe',
|
||||
'path': path,
|
||||
'data': data
|
||||
}
|
||||
|
||||
await self.websocket.send(json.dumps(message))
|
||||
|
||||
# Yield messages as they come
|
||||
async for message in self.websocket:
|
||||
data = json.loads(message)
|
||||
if data.get('subscription_id') == subscription_id:
|
||||
yield data
|
||||
|
||||
async def _handle_messages(self):
|
||||
"""Handle incoming WebSocket messages"""
|
||||
async for message in self.websocket:
|
||||
data = json.loads(message)
|
||||
# Handle subscriptions and other messages
|
||||
pass
|
||||
```
|
||||
|
||||
### Cross-Chain Transport Implementation
|
||||
|
||||
```python
|
||||
from ..settlement.manager import BridgeManager
|
||||
from ..settlement.bridges.base import SettlementMessage, SettlementResult
|
||||
|
||||
class CrossChainTransport(Transport):
|
||||
"""Transport for cross-chain settlements"""
|
||||
|
||||
def __init__(self, config: Dict[str, Any]):
|
||||
super().__init__(config)
|
||||
self.bridge_manager = BridgeManager(config.get('storage'))
|
||||
self.base_transport = config.get('base_transport')
|
||||
|
||||
async def connect(self) -> None:
|
||||
"""Initialize bridge manager"""
|
||||
await self.bridge_manager.initialize(self.config.get('bridges', {}))
|
||||
if self.base_transport:
|
||||
await self.base_transport.connect()
|
||||
self._connected = True
|
||||
|
||||
async def disconnect(self) -> None:
|
||||
"""Disconnect all bridges"""
|
||||
if self.base_transport:
|
||||
await self.base_transport.disconnect()
|
||||
self._connected = False
|
||||
|
||||
async def request(
|
||||
self,
|
||||
method: str,
|
||||
path: str,
|
||||
data: Optional[Dict[str, Any]] = None,
|
||||
params: Optional[Dict[str, Any]] = None,
|
||||
headers: Optional[Dict[str, str]] = None
|
||||
) -> Dict[str, Any]:
|
||||
"""Handle cross-chain requests"""
|
||||
if path.startswith('/settlement/'):
|
||||
return await self._handle_settlement_request(method, path, data)
|
||||
|
||||
# Forward to base transport for other requests
|
||||
if self.base_transport:
|
||||
return await self.base_transport.request(
|
||||
method, path, data, params, headers
|
||||
)
|
||||
|
||||
raise NotImplementedError(f"Path {path} not supported")
|
||||
|
||||
async def settle_cross_chain(
|
||||
self,
|
||||
message: SettlementMessage,
|
||||
bridge_name: Optional[str] = None
|
||||
) -> SettlementResult:
|
||||
"""Settle message across chains"""
|
||||
return await self.bridge_manager.settle_cross_chain(
|
||||
message, bridge_name
|
||||
)
|
||||
|
||||
async def estimate_settlement_cost(
|
||||
self,
|
||||
message: SettlementMessage,
|
||||
bridge_name: Optional[str] = None
|
||||
) -> Dict[str, Any]:
|
||||
"""Estimate settlement cost"""
|
||||
return await self.bridge_manager.estimate_settlement_cost(
|
||||
message, bridge_name
|
||||
)
|
||||
|
||||
async def _handle_settlement_request(
|
||||
self,
|
||||
method: str,
|
||||
path: str,
|
||||
data: Optional[Dict[str, Any]]
|
||||
) -> Dict[str, Any]:
|
||||
"""Handle settlement-specific requests"""
|
||||
if method == 'POST' and path == '/settlement/cross-chain':
|
||||
message = SettlementMessage(**data)
|
||||
result = await self.settle_cross_chain(message)
|
||||
return {
|
||||
'message_id': result.message_id,
|
||||
'status': result.status.value,
|
||||
'transaction_hash': result.transaction_hash
|
||||
}
|
||||
|
||||
elif method == 'GET' and path.startswith('/settlement/'):
|
||||
message_id = path.split('/')[-1]
|
||||
result = await self.bridge_manager.get_settlement_status(message_id)
|
||||
return {
|
||||
'message_id': message_id,
|
||||
'status': result.status.value,
|
||||
'error_message': result.error_message
|
||||
}
|
||||
|
||||
else:
|
||||
raise ValueError(f"Unsupported settlement request: {method} {path}")
|
||||
```
|
||||
|
||||
## Multi-Network Client
|
||||
|
||||
### Network Configuration
|
||||
|
||||
```python
|
||||
@dataclass
|
||||
class NetworkConfig:
|
||||
"""Configuration for a network"""
|
||||
name: str
|
||||
chain_id: int
|
||||
transport: Transport
|
||||
is_default: bool = False
|
||||
bridges: List[str] = None
|
||||
|
||||
class MultiNetworkClient:
|
||||
"""Client supporting multiple networks"""
|
||||
|
||||
def __init__(self):
|
||||
self.networks: Dict[int, NetworkConfig] = {}
|
||||
self.default_network: Optional[int] = None
|
||||
|
||||
def add_network(self, config: NetworkConfig) -> None:
|
||||
"""Add a network configuration"""
|
||||
self.networks[config.chain_id] = config
|
||||
if config.is_default or self.default_network is None:
|
||||
self.default_network = config.chain_id
|
||||
|
||||
def get_transport(self, chain_id: Optional[int] = None) -> Transport:
|
||||
"""Get transport for a network"""
|
||||
network_id = chain_id or self.default_network
|
||||
if network_id not in self.networks:
|
||||
raise ValueError(f"Network {network_id} not configured")
|
||||
|
||||
return self.networks[network_id].transport
|
||||
|
||||
async def connect_all(self) -> None:
|
||||
"""Connect to all configured networks"""
|
||||
for config in self.networks.values():
|
||||
await config.transport.connect()
|
||||
|
||||
async def disconnect_all(self) -> None:
|
||||
"""Disconnect from all networks"""
|
||||
for config in self.networks.values():
|
||||
await config.transport.disconnect()
|
||||
```
|
||||
|
||||
## Updated SDK Client
|
||||
|
||||
### New Client Implementation
|
||||
|
||||
```python
|
||||
class AITBCClient:
|
||||
"""AITBC client with pluggable transports"""
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
transport: Optional[Union[Transport, Dict[str, Any]]] = None,
|
||||
multi_network: bool = False
|
||||
):
|
||||
if multi_network:
|
||||
self._init_multi_network(transport or {})
|
||||
else:
|
||||
self._init_single_network(transport or {})
|
||||
|
||||
def _init_single_network(self, transport_config: Union[Transport, Dict[str, Any]]) -> None:
|
||||
"""Initialize single network client"""
|
||||
if isinstance(transport_config, Transport):
|
||||
self.transport = transport_config
|
||||
else:
|
||||
# Default to HTTP transport
|
||||
self.transport = HTTPTransport(transport_config)
|
||||
|
||||
self.multi_network = False
|
||||
self._init_apis()
|
||||
|
||||
def _init_multi_network(self, configs: Dict[str, Any]) -> None:
|
||||
"""Initialize multi-network client"""
|
||||
self.multi_network_client = MultiNetworkClient()
|
||||
|
||||
# Configure networks
|
||||
for name, config in configs.get('networks', {}).items():
|
||||
transport = self._create_transport(config)
|
||||
network_config = NetworkConfig(
|
||||
name=name,
|
||||
chain_id=config['chain_id'],
|
||||
transport=transport,
|
||||
is_default=config.get('default', False)
|
||||
)
|
||||
self.multi_network_client.add_network(network_config)
|
||||
|
||||
self.multi_network = True
|
||||
self._init_apis()
|
||||
|
||||
def _create_transport(self, config: Dict[str, Any]) -> Transport:
|
||||
"""Create transport from config"""
|
||||
transport_type = config.get('type', 'http')
|
||||
|
||||
if transport_type == 'http':
|
||||
return HTTPTransport(config)
|
||||
elif transport_type == 'websocket':
|
||||
return WebSocketTransport(config)
|
||||
elif transport_type == 'crosschain':
|
||||
return CrossChainTransport(config)
|
||||
else:
|
||||
raise ValueError(f"Unknown transport type: {transport_type}")
|
||||
|
||||
def _init_apis(self) -> None:
|
||||
"""Initialize API clients"""
|
||||
if self.multi_network:
|
||||
self.jobs = MultiNetworkJobsAPI(self.multi_network_client)
|
||||
self.settlement = MultiNetworkSettlementAPI(self.multi_network_client)
|
||||
else:
|
||||
self.jobs = JobsAPI(self.transport)
|
||||
self.settlement = SettlementAPI(self.transport)
|
||||
|
||||
# Other APIs remain the same but use the transport
|
||||
self.marketplace = MarketplaceAPI(self.transport)
|
||||
self.wallet = WalletAPI(self.transport)
|
||||
self.receipts = ReceiptsAPI(self.transport)
|
||||
|
||||
async def connect(self) -> None:
|
||||
"""Connect to network(s)"""
|
||||
if self.multi_network:
|
||||
await self.multi_network_client.connect_all()
|
||||
else:
|
||||
await self.transport.connect()
|
||||
|
||||
async def disconnect(self) -> None:
|
||||
"""Disconnect from network(s)"""
|
||||
if self.multi_network:
|
||||
await self.multi_network_client.disconnect_all()
|
||||
else:
|
||||
await self.transport.disconnect()
|
||||
```
|
||||
|
||||
## Usage Examples
|
||||
|
||||
### Single Network with HTTP Transport
|
||||
|
||||
```python
|
||||
from aitbc import AITBCClient, HTTPTransport
|
||||
|
||||
# Create client with HTTP transport
|
||||
transport = HTTPTransport({
|
||||
'base_url': 'https://api.aitbc.io',
|
||||
'timeout': 30,
|
||||
'default_headers': {'X-API-Key': 'your-key'}
|
||||
})
|
||||
|
||||
client = AITBCClient(transport)
|
||||
await client.connect()
|
||||
|
||||
# Use APIs normally
|
||||
job = await client.jobs.create({...})
|
||||
```
|
||||
|
||||
### Multi-Network Configuration
|
||||
|
||||
```python
|
||||
from aitbc import AITBCClient
|
||||
|
||||
config = {
|
||||
'networks': {
|
||||
'ethereum': {
|
||||
'type': 'http',
|
||||
'chain_id': 1,
|
||||
'base_url': 'https://api.aitbc.io',
|
||||
'default': True
|
||||
},
|
||||
'polygon': {
|
||||
'type': 'http',
|
||||
'chain_id': 137,
|
||||
'base_url': 'https://polygon-api.aitbc.io'
|
||||
},
|
||||
'arbitrum': {
|
||||
'type': 'crosschain',
|
||||
'chain_id': 42161,
|
||||
'base_transport': HTTPTransport({
|
||||
'base_url': 'https://arbitrum-api.aitbc.io'
|
||||
}),
|
||||
'bridges': {
|
||||
'layerzero': {'enabled': True},
|
||||
'chainlink': {'enabled': True}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
client = AITBCClient(config, multi_network=True)
|
||||
await client.connect()
|
||||
|
||||
# Create job on specific network
|
||||
job = await client.jobs.create({...}, chain_id=137)
|
||||
|
||||
# Settle across chains
|
||||
settlement = await client.settlement.settle_cross_chain(
|
||||
job_id=job['id'],
|
||||
target_chain_id=42161,
|
||||
bridge_name='layerzero'
|
||||
)
|
||||
```
|
||||
|
||||
### Cross-Chain Settlement
|
||||
|
||||
```python
|
||||
# Create job on Ethereum
|
||||
job = await client.jobs.create({
|
||||
'name': 'cross-chain-ai-job',
|
||||
'target_chain': 42161, # Arbitrum
|
||||
'requires_cross_chain_settlement': True
|
||||
})
|
||||
|
||||
# Wait for completion
|
||||
result = await client.jobs.wait_for_completion(job['id'])
|
||||
|
||||
# Settle to Arbitrum
|
||||
settlement = await client.settlement.settle_cross_chain(
|
||||
job_id=job['id'],
|
||||
target_chain_id=42161,
|
||||
bridge_name='layerzero'
|
||||
)
|
||||
|
||||
# Monitor settlement
|
||||
status = await client.settlement.get_status(settlement['message_id'])
|
||||
```
|
||||
|
||||
## Migration Guide
|
||||
|
||||
### From Current SDK
|
||||
|
||||
```python
|
||||
# Old way
|
||||
client = AITBCClient(api_key='key', base_url='url')
|
||||
|
||||
# New way (backward compatible)
|
||||
client = AITBCClient({
|
||||
'base_url': 'url',
|
||||
'default_headers': {'X-API-Key': 'key'}
|
||||
})
|
||||
|
||||
# Or with explicit transport
|
||||
transport = HTTPTransport({
|
||||
'base_url': 'url',
|
||||
'default_headers': {'X-API-Key': 'key'}
|
||||
})
|
||||
client = AITBCClient(transport)
|
||||
```
|
||||
|
||||
## Benefits
|
||||
|
||||
1. **Flexibility**: Easy to add new transport types
|
||||
2. **Multi-Network**: Support for multiple blockchains
|
||||
3. **Cross-Chain**: Built-in support for cross-chain settlements
|
||||
4. **Backward Compatible**: Existing code continues to work
|
||||
5. **Testable**: Easy to mock transports for testing (see the sketch below)
|
||||
6. **Extensible**: Plugin architecture for custom transports
|
||||
|
||||
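A sketch of the testing benefit: an in-memory transport records requests and returns canned responses, so API tests run without network I/O. It assumes pytest-asyncio and that `JobsAPI.create` issues `POST /v1/jobs` through the transport:

```python
import pytest


class MockTransport(Transport):
    """In-memory transport for tests (sketch)."""

    def __init__(self, responses):
        super().__init__({})
        self.responses = responses
        self.calls = []

    async def connect(self) -> None:
        self._connected = True

    async def disconnect(self) -> None:
        self._connected = False

    async def request(self, method, path, data=None, params=None, headers=None):
        self.calls.append((method, path, data))
        return self.responses[(method, path)]

    async def stream(self, method, path, data=None):
        raise NotImplementedError


@pytest.mark.asyncio
async def test_job_creation_uses_transport():
    transport = MockTransport({("POST", "/v1/jobs"): {"id": "job-123"}})
    client = AITBCClient(transport)
    job = await client.jobs.create({"name": "demo"})  # assumed call shape
    assert job["id"] == "job-123"
    assert transport.calls[0][1] == "/v1/jobs"
```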
---
|
||||
|
||||
*Document Version: 1.0*
|
||||
*Last Updated: 2025-01-10*
|
||||
*Owner: SDK Team*
|
||||
@ -127,7 +127,7 @@ python3 -m venv venv && source venv/bin/activate
|
||||
pip install fastapi uvicorn[standard] torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
|
||||
pip install diffusers transformers accelerate pillow safetensors xformers slowapi httpx
|
||||
# Start
|
||||
uvicorn app:app --host 0.0.0.0 --port 8000 --workers 1
|
||||
uvicorn app:app --host 127.0.0.2 --port 8000 --workers 1
|
||||
```
|
||||
|
||||
## Akzeptanzkriterien
|
||||
@ -166,7 +166,7 @@ uvicorn app:app --host 0.0.0.0 --port 8000 --workers 1
|
||||
```
|
||||
API_KEY=CHANGE_ME_SUPERSECRET
|
||||
MODEL_ID=runwayml/stable-diffusion-v1-5
|
||||
BIND_HOST=0.0.0.0
|
||||
BIND_HOST=127.0.0.2
|
||||
BIND_PORT=8000
|
||||
```
|
||||
|
||||
@ -257,7 +257,7 @@ def generate(req: GenRequest, request: Request):
|
||||
|
||||
if __name__ == "__main__":
|
||||
import uvicorn, os
|
||||
uvicorn.run("server:app", host=os.getenv("BIND_HOST", "0.0.0.0"), port=int(os.getenv("BIND_PORT", "8000")), reload=False)
|
||||
uvicorn.run("server:app", host=os.getenv("BIND_HOST", "127.0.0.2"), port=int(os.getenv("BIND_PORT", "8000")), reload=False)
|
||||
```
|
||||
|
||||
## `client.py`
|
||||
@ -252,8 +252,8 @@ Provide `scripts/make_genesis.py`.
|
||||
## 17) Configuration (ENV)
|
||||
- `CHAIN_ID=ait-devnet`
|
||||
- `DB_PATH=./data/chain.db`
|
||||
- `P2P_BIND=0.0.0.0:7070`
|
||||
- `RPC_BIND=0.0.0.0:8080`
|
||||
- `P2P_BIND=127.0.0.2:7070`
|
||||
- `RPC_BIND=127.0.0.2:8080`
|
||||
- `BOOTSTRAP_PEERS=ws://host:7070,...`
|
||||
- `PROPOSER_KEY=...` (optional for non-authors)
|
||||
- `MINT_PER_UNIT=1000`
|
||||
@ -33,7 +33,7 @@ The minimal info Windsurf needs to spin everything up quickly:
|
||||
- **GPU optional**: ensure `nvidia-smi` works for CUDA path.
|
||||
3. **Boot the Mock Coordinator** (new terminal):
|
||||
```bash
|
||||
uvicorn mock_coordinator:app --reload --host 0.0.0.0 --port 8080
|
||||
uvicorn mock_coordinator:app --reload --host 127.0.0.2 --port 8080
|
||||
```
|
||||
4. **Install & Start Miner**
|
||||
```bash
|
||||
185
docs/reference/confidential-implementation-summary.md
Normal file
@ -0,0 +1,185 @@
|
||||
# Confidential Transactions Implementation Summary
|
||||
|
||||
## Overview
|
||||
|
||||
Successfully implemented a comprehensive confidential transaction system for AITBC with opt-in encryption, selective disclosure, and full audit compliance. The implementation provides privacy for sensitive transaction data while maintaining regulatory compliance.
|
||||
|
||||
## Completed Components
|
||||
|
||||
### 1. Encryption Service ✅
|
||||
- **Hybrid Encryption**: AES-256-GCM for data encryption, X25519 for key exchange
|
||||
- **Envelope Pattern**: Random DEK per transaction, encrypted for each participant
|
||||
- **Audit Escrow**: Separate encryption key for regulatory access
|
||||
- **Performance**: Efficient batch operations, key caching
|
||||
|
||||
### 2. Key Management ✅
|
||||
- **Per-Participant Keys**: X25519 key pairs for each participant
|
||||
- **Key Rotation**: Automated rotation with re-encryption of active data
|
||||
- **Secure Storage**: File-based storage (development), HSM-ready interface
|
||||
- **Access Control**: Role-based permissions for key operations
|
||||
|
||||
### 3. Access Control ✅
|
||||
- **Role-Based Policies**: Client, Miner, Coordinator, Auditor, Regulator roles
|
||||
- **Time Restrictions**: Business hours, retention periods
|
||||
- **Purpose-Based Access**: Settlement, Audit, Compliance, Dispute, Support
|
||||
- **Dynamic Policies**: Custom policy creation and management
|
||||
|
||||
### 4. Audit Logging ✅
|
||||
- **Tamper-Evident**: Chain of hashes for integrity verification
|
||||
- **Comprehensive**: All access, key operations, policy changes
|
||||
- **Export Capabilities**: JSON, CSV formats for regulators
|
||||
- **Retention**: Configurable retention periods by role
|
||||
|
||||
### 5. API Endpoints ✅
|
||||
- **/confidential/transactions**: Create and manage confidential transactions
|
||||
- **/confidential/access**: Request access to encrypted data
|
||||
- **/confidential/audit**: Regulatory access with authorization
|
||||
- **/confidential/keys**: Key registration and rotation
|
||||
- **Rate Limiting**: Protection against abuse
|
||||
|
||||
### 6. Data Models ✅
|
||||
- **ConfidentialTransaction**: Opt-in privacy flags
|
||||
- **Access Control Models**: Requests, responses, logs
|
||||
- **Key Management Models**: Registration, rotation, audit
|
||||
|
||||
## Security Features
|
||||
|
||||
### Encryption
|
||||
- AES-256-GCM provides confidentiality + integrity
|
||||
- X25519 ECDH for secure key exchange
|
||||
- Per-transaction DEKs for forward secrecy
|
||||
- Random IVs per encryption
|
||||
|
||||
### Access Control
|
||||
- Multi-factor authentication ready
|
||||
- Time-bound access permissions
|
||||
- Business hour restrictions for auditors
|
||||
- Retention period enforcement
|
||||
|
||||
### Audit Compliance
|
||||
- GDPR right to encryption
|
||||
- SEC Rule 17a-4 compliance
|
||||
- Immutable audit trails
|
||||
- Regulatory access with court orders
|
||||
|
||||
## Current Limitations
|
||||
|
||||
### 1. Database Persistence ❌
|
||||
- Current implementation uses mock storage
|
||||
- Needs SQLModel/SQLAlchemy integration
|
||||
- Transaction storage and querying
|
||||
- Encrypted data BLOB handling
|
||||
|
||||
### 2. Private Key Security ❌
|
||||
- File storage writes keys unencrypted
|
||||
- Needs HSM or KMS integration
|
||||
- Key escrow for recovery
|
||||
- Hardware security module support
|
||||
|
||||
### 3. Async Issues ❌
|
||||
- AuditLogger uses threading in async context
|
||||
- Needs asyncio task conversion
|
||||
- Background writer refactoring
|
||||
- Proper async/await patterns
|
||||
|
||||
### 4. Rate Limiting ⚠️
|
||||
- slowapi not properly integrated
|
||||
- Needs FastAPI app state setup
|
||||
- Distributed rate limiting for production
|
||||
- Redis backend for scalability
|
||||
|
||||
## Production Readiness Checklist
|
||||
|
||||
### Critical (Must Fix)
|
||||
- [ ] Database persistence layer
|
||||
- [ ] HSM/KMS integration for private keys
|
||||
- [ ] Fix async issues in audit logging
|
||||
- [ ] Proper rate limiting setup
|
||||
|
||||
### Important (Should Fix)
|
||||
- [ ] Performance optimization for high volume
|
||||
- [ ] Distributed key management
|
||||
- [ ] Backup and recovery procedures
|
||||
- [ ] Monitoring and alerting
|
||||
|
||||
### Nice to Have (Future)
|
||||
- [ ] Multi-party computation
|
||||
- [ ] Zero-knowledge proofs integration
|
||||
- [ ] Advanced privacy features
|
||||
- [ ] Cross-chain confidential settlements
|
||||
|
||||
## Testing Coverage
|
||||
|
||||
### Unit Tests ✅
|
||||
- Encryption/decryption correctness
|
||||
- Key management operations
|
||||
- Access control logic
|
||||
- Audit logging functionality
|
||||
|
||||
### Integration Tests ✅
|
||||
- End-to-end transaction flow
|
||||
- Cross-service integration
|
||||
- API endpoint testing
|
||||
- Error handling scenarios
|
||||
|
||||
### Performance Tests ⚠️
|
||||
- Basic benchmarks included
|
||||
- Needs load testing
|
||||
- Scalability assessment
|
||||
- Resource usage profiling
|
||||
|
||||
## Migration Strategy
|
||||
|
||||
### Phase 1: Infrastructure (Week 1-2)
|
||||
1. Implement database persistence
|
||||
2. Integrate HSM for key storage
|
||||
3. Fix async issues
|
||||
4. Set up proper rate limiting
|
||||
|
||||
### Phase 2: Security Hardening (Week 3-4)
|
||||
1. Security audit and penetration testing
|
||||
2. Implement additional monitoring
|
||||
3. Create backup procedures
|
||||
4. Document security controls
|
||||
|
||||
### Phase 3: Production Rollout (Month 2)
|
||||
1. Gradual rollout with feature flags
|
||||
2. Performance monitoring
|
||||
3. User training and documentation
|
||||
4. Compliance validation
|
||||
|
||||
## Compliance Status
|
||||
|
||||
### GDPR ✅
|
||||
- Right to encryption implemented
|
||||
- Data minimization by design
|
||||
- Privacy by default
|
||||
|
||||
### Financial Regulations ✅
|
||||
- SEC Rule 17a-4 audit logs
|
||||
- MiFID II transaction reporting
|
||||
- AML/KYC integration points
|
||||
|
||||
### Industry Standards ✅
|
||||
- ISO 27001 alignment
|
||||
- NIST Cybersecurity Framework
|
||||
- PCI DSS considerations
|
||||
|
||||
## Next Steps
|
||||
|
||||
1. **Immediate**: Fix database persistence and HSM integration
|
||||
2. **Short-term**: Complete security hardening and testing
|
||||
3. **Long-term**: Production deployment and monitoring
|
||||
|
||||
## Documentation
|
||||
|
||||
- [Architecture Design](confidential-transactions.md)
|
||||
- [API Documentation](../docs/api/coordinator/endpoints.md)
|
||||
- [Security Guide](security-guidelines.md)
|
||||
- [Compliance Matrix](compliance-matrix.md)
|
||||
|
||||
## Conclusion
|
||||
|
||||
The confidential transaction system provides a solid foundation for privacy-preserving transactions in AITBC. While the core functionality is complete and tested, several production readiness items need to be addressed before deployment.
|
||||
|
||||
The modular design allows for incremental improvements and ensures the system can evolve with changing requirements and regulations.
|
||||
354
docs/reference/confidential-transactions.md
Normal file
@ -0,0 +1,354 @@
|
||||
# Confidential Transactions Architecture
|
||||
|
||||
## Overview
|
||||
|
||||
Design for opt-in confidential transaction support in AITBC, enabling participants to encrypt sensitive transaction data while maintaining selective disclosure and audit capabilities.
|
||||
|
||||
## Architecture
|
||||
|
||||
### Encryption Model
|
||||
|
||||
**Hybrid Encryption with Envelope Pattern**:
|
||||
1. **Data Encryption**: AES-256-GCM for transaction data
|
||||
2. **Key Exchange**: X25519 ECDH for per-recipient key distribution
|
||||
3. **Envelope Pattern**: Random DEK per transaction, encrypted for each authorized party
|
||||
|
||||
### Key Components
|
||||
|
||||
```
|
||||
┌─────────────────┐ ┌──────────────────┐ ┌─────────────────┐
|
||||
│ Transaction │───▶│ Encryption │───▶│ Storage │
|
||||
│ Service │ │ Service │ │ Layer │
|
||||
└─────────────────┘ └──────────────────┘ └─────────────────┘
|
||||
│ │ │
|
||||
▼ ▼ ▼
|
||||
┌─────────────────┐ ┌──────────────────┐ ┌─────────────────┐
|
||||
│ Key Manager │ │ Access Control │ │ Audit Log │
|
||||
└─────────────────┘ └──────────────────┘ └─────────────────┘
|
||||
```
|
||||
|
||||
## Data Flow
|
||||
|
||||
### 1. Transaction Creation (Opt-in)
|
||||
|
||||
```python
|
||||
# Client requests confidential transaction
|
||||
transaction = {
|
||||
"job_id": "job-123",
|
||||
"amount": "1000",
|
||||
"confidential": True,
|
||||
"participants": ["client-456", "miner-789", "auditor-001"]
|
||||
}
|
||||
|
||||
# Coordinator encrypts sensitive fields
|
||||
encrypted = encryption_service.encrypt(
|
||||
data={"amount": "1000", "pricing": "details"},
|
||||
participants=transaction["participants"]
|
||||
)
|
||||
|
||||
# Store with encrypted payload
|
||||
stored_transaction = {
|
||||
"job_id": "job-123",
|
||||
"public_data": {"job_id": "job-123"},
|
||||
"encrypted_data": encrypted.ciphertext,
|
||||
"encrypted_keys": encrypted.encrypted_keys,
|
||||
"confidential": True
|
||||
}
|
||||
```
|
||||
|
||||
### 2. Data Access (Authorized Party)
|
||||
|
||||
```python
|
||||
# Miner requests access to transaction data
|
||||
access_request = {
|
||||
"transaction_id": "tx-456",
|
||||
"requester": "miner-789",
|
||||
"purpose": "settlement"
|
||||
}
|
||||
|
||||
# Verify access rights
|
||||
if access_control.verify(access_request):
|
||||
# Decrypt using recipient's private key
|
||||
decrypted = encryption_service.decrypt(
|
||||
ciphertext=stored_transaction.encrypted_data,
|
||||
encrypted_key=stored_transaction.encrypted_keys["miner-789"],
|
||||
private_key=miner_private_key
|
||||
)
|
||||
```
|
||||
|
||||
### 3. Audit Access (Regulatory)
|
||||
|
||||
```python
|
||||
# Auditor with court order requests access
|
||||
audit_request = {
|
||||
"transaction_id": "tx-456",
|
||||
"requester": "auditor-001",
|
||||
"authorization": "court-order-123"
|
||||
}
|
||||
|
||||
# Special audit key escrow
|
||||
audit_key = key_manager.get_audit_key(audit_request.authorization)
|
||||
decrypted = encryption_service.audit_decrypt(
|
||||
ciphertext=stored_transaction.encrypted_data,
|
||||
audit_key=audit_key
|
||||
)
|
||||
```
|
||||
|
||||
## Implementation Details
|
||||
|
||||
### Encryption Service
|
||||
|
||||
```python
|
||||
class ConfidentialTransactionService:
|
||||
"""Service for handling confidential transactions"""
|
||||
|
||||
def __init__(self, key_manager: KeyManager):
|
||||
self.key_manager = key_manager
|
||||
self.cipher = AES256GCM()
|
||||
|
||||
def encrypt(self, data: Dict, participants: List[str]) -> EncryptedData:
|
||||
"""Encrypt data for multiple participants"""
|
||||
# Generate random DEK
|
||||
dek = os.urandom(32)
|
||||
|
||||
# Encrypt data with DEK
|
||||
ciphertext = self.cipher.encrypt(dek, json.dumps(data))
|
||||
|
||||
# Encrypt DEK for each participant
|
||||
encrypted_keys = {}
|
||||
for participant in participants:
|
||||
public_key = self.key_manager.get_public_key(participant)
|
||||
encrypted_keys[participant] = self._encrypt_dek(dek, public_key)
|
||||
|
||||
# Add audit escrow
|
||||
audit_public_key = self.key_manager.get_audit_key()
|
||||
encrypted_keys["audit"] = self._encrypt_dek(dek, audit_public_key)
|
||||
|
||||
return EncryptedData(
|
||||
ciphertext=ciphertext,
|
||||
encrypted_keys=encrypted_keys,
|
||||
algorithm="AES-256-GCM+X25519"
|
||||
)
|
||||
|
||||
def decrypt(self, ciphertext: bytes, encrypted_key: bytes,
|
||||
private_key: bytes) -> Dict:
|
||||
"""Decrypt data for specific participant"""
|
||||
# Decrypt DEK
|
||||
dek = self._decrypt_dek(encrypted_key, private_key)
|
||||
|
||||
# Decrypt data
|
||||
plaintext = self.cipher.decrypt(dek, ciphertext)
|
||||
return json.loads(plaintext)
|
||||
```
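`_encrypt_dek` / `_decrypt_dek` are referenced above but not shown. One way to realize the wrapping step with the `cryptography` package (ephemeral-static X25519 ECDH, an HKDF-derived wrapping key, AES-GCM); the wire format and the `info` label are assumptions:

```python
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey, X25519PublicKey
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat


def _encrypt_dek(dek: bytes, recipient_public_key: X25519PublicKey) -> bytes:
    """Wrap a per-transaction DEK for one recipient (sketch)."""
    ephemeral = X25519PrivateKey.generate()
    shared_secret = ephemeral.exchange(recipient_public_key)
    wrap_key = HKDF(
        algorithm=hashes.SHA256(), length=32, salt=None, info=b"aitbc-dek-wrap"
    ).derive(shared_secret)
    nonce = os.urandom(12)
    wrapped = AESGCM(wrap_key).encrypt(nonce, dek, None)
    ephemeral_pub = ephemeral.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
    # Recipient unwraps with: ECDH(own private key, ephemeral_pub) -> HKDF -> AES-GCM decrypt
    return ephemeral_pub + nonce + wrapped
```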
|
||||
|
||||
### Key Management
|
||||
|
||||
```python
|
||||
class KeyManager:
|
||||
"""Manages encryption keys for participants"""
|
||||
|
||||
def __init__(self, storage: KeyStorage):
|
||||
self.storage = storage
|
||||
self.key_pairs = {}
|
||||
|
||||
def generate_key_pair(self, participant_id: str) -> KeyPair:
|
||||
"""Generate X25519 key pair for participant"""
|
||||
private_key = X25519.generate_private_key()
|
||||
public_key = private_key.public_key()
|
||||
|
||||
key_pair = KeyPair(
|
||||
participant_id=participant_id,
|
||||
private_key=private_key,
|
||||
public_key=public_key
|
||||
)
|
||||
|
||||
self.storage.store(key_pair)
|
||||
return key_pair
|
||||
|
||||
def rotate_keys(self, participant_id: str):
|
||||
"""Rotate encryption keys"""
|
||||
# Generate new key pair
|
||||
new_key_pair = self.generate_key_pair(participant_id)
|
||||
|
||||
# Re-encrypt active transactions
|
||||
self._reencrypt_transactions(participant_id, new_key_pair)
|
||||
```
|
||||
|
||||
### Access Control
|
||||
|
||||
```python
|
||||
class AccessController:
|
||||
"""Controls access to confidential transaction data"""
|
||||
|
||||
def __init__(self, policy_store: PolicyStore):
|
||||
self.policy_store = policy_store
|
||||
|
||||
def verify_access(self, request: AccessRequest) -> bool:
|
||||
"""Verify if requester has access rights"""
|
||||
# Check participant status
|
||||
if not self._is_authorized_participant(request.requester):
|
||||
return False
|
||||
|
||||
# Check purpose-based access
|
||||
if not self._check_purpose(request.purpose, request.requester):
|
||||
return False
|
||||
|
||||
# Check time-based restrictions
|
||||
if not self._check_time_restrictions(request):
|
||||
return False
|
||||
|
||||
return True
|
||||
|
||||
def _is_authorized_participant(self, participant_id: str) -> bool:
|
||||
"""Check if participant is authorized for confidential transactions"""
|
||||
# Verify KYC/KYB status
|
||||
# Check compliance flags
|
||||
# Validate regulatory approval
|
||||
return True
|
||||
```
|
||||
|
||||
## Data Models
|
||||
|
||||
### Confidential Transaction
|
||||
|
||||
```python
|
||||
class ConfidentialTransaction(BaseModel):
|
||||
"""Transaction with optional confidential fields"""
|
||||
|
||||
# Public fields (always visible)
|
||||
transaction_id: str
|
||||
job_id: str
|
||||
timestamp: datetime
|
||||
status: str
|
||||
|
||||
# Confidential fields (encrypted when opt-in)
|
||||
amount: Optional[str] = None
|
||||
pricing: Optional[Dict] = None
|
||||
settlement_details: Optional[Dict] = None
|
||||
|
||||
# Encryption metadata
|
||||
confidential: bool = False
|
||||
encrypted_data: Optional[bytes] = None
|
||||
encrypted_keys: Optional[Dict[str, bytes]] = None
|
||||
algorithm: Optional[str] = None
|
||||
|
||||
# Access control
|
||||
participants: List[str] = []
|
||||
access_policies: Dict[str, Any] = {}
|
||||
```
|
||||
|
||||
### Access Log
|
||||
|
||||
```python
|
||||
class ConfidentialAccessLog(BaseModel):
|
||||
"""Audit log for confidential data access"""
|
||||
|
||||
transaction_id: str
|
||||
requester: str
|
||||
purpose: str
|
||||
timestamp: datetime
|
||||
authorized_by: str
|
||||
data_accessed: List[str]
|
||||
ip_address: str
|
||||
user_agent: str
|
||||
```

## Security Considerations

### 1. Key Security
- Private keys stored in an HSM or secure enclave
- Key rotation every 90 days
- Zero-knowledge proof of key possession

### 2. Data Protection
- AES-256-GCM provides both confidentiality and integrity (authenticated encryption; see the sketch after this list)
- Random IV per encryption, never reused under the same key
- Forward secrecy with per-transaction DEKs
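
A minimal sketch of the per-transaction DEK pattern with the `cryptography` package, assuming a fresh 96-bit nonce per encryption; wrapping the DEK to each participant's X25519 public key is omitted here:

```python
# Minimal sketch, assuming the `cryptography` package; DEK wrapping to
# participants' X25519 public keys is omitted.
import json
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM


def encrypt_confidential_fields(fields: dict) -> tuple[bytes, bytes, bytes]:
    """Encrypt confidential fields with a fresh per-transaction DEK."""
    dek = AESGCM.generate_key(bit_length=256)  # per-transaction data encryption key
    nonce = os.urandom(12)                     # random 96-bit nonce, never reused with the same key
    ciphertext = AESGCM(dek).encrypt(nonce, json.dumps(fields).encode(), None)
    return dek, nonce, ciphertext
```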

### 3. Access Control
- Multi-factor authentication for decryption
- Role-based access control
- Time-bound access permissions

### 4. Audit Compliance
- Immutable audit logs
- Regulatory access with court orders
- Privacy-preserving audit proofs

## Performance Optimization

### 1. Lazy Encryption
- Only encrypt fields marked as confidential
- Cache encrypted data for frequent access
- Batch encryption for bulk operations

### 2. Key Management
- Pre-compute shared secrets for regular participants (see the sketch after this list)
- Use key derivation for multiple access levels
- Implement key caching with secure eviction
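
A minimal sketch of pre-computing an X25519 shared secret and deriving per-access-level keys from it with HKDF; the `info` labels are assumptions, not protocol constants:

```python
# Minimal sketch, assuming the `cryptography` package; the HKDF `info` labels
# for access levels are illustrative.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey, X25519PublicKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF


def derive_level_key(our_key: X25519PrivateKey, their_pub: X25519PublicKey, level: str) -> bytes:
    """Derive a 256-bit key for one access level from an X25519 shared secret."""
    shared_secret = our_key.exchange(their_pub)  # can be cached per participant pair
    return HKDF(
        algorithm=hashes.SHA256(),
        length=32,
        salt=None,
        info=f"aitbc-confidential:{level}".encode(),
    ).derive(shared_secret)
```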

### 3. Storage Optimization
- Compress data before encryption (ciphertext itself is effectively incompressible)
- Deduplicate common encrypted patterns
- Use column-level encryption for databases

## Migration Strategy

### Phase 1: Opt-in Support
- Add confidential flags to existing models
- Deploy encryption service
- Update transaction endpoints

### Phase 2: Participant Onboarding
- Generate key pairs for all participants
- Implement key distribution
- Train users on privacy features

### Phase 3: Full Rollout
- Enable confidential transactions by default for sensitive data
- Implement advanced access controls
- Add privacy analytics and reporting

## Testing Strategy

### 1. Unit Tests
- Encryption/decryption correctness
- Key management operations
- Access control logic

### 2. Integration Tests
- End-to-end confidential transaction flow
- Cross-system key exchange
- Audit trail verification

### 3. Security Tests
- Penetration testing
- Cryptographic validation
- Side-channel resistance

## Compliance

### 1. GDPR
- Encryption as a technical safeguard (Art. 32)
- Data minimization
- Privacy by design

### 2. Financial Regulations
- SEC Rule 17a-4
- MiFID II transaction reporting
- AML/KYC requirements

### 3. Industry Standards
- ISO 27001
- NIST Cybersecurity Framework
- PCI DSS for payment data

## Next Steps

1. Implement core encryption service
2. Create key management infrastructure
3. Update transaction models and APIs
4. Deploy access control system
5. Implement audit logging
6. Conduct security testing
7. Gradual rollout with monitoring

192
docs/reference/docs-gaps.md
Normal file
@ -0,0 +1,192 @@

# AITBC Documentation Gaps Report
|
||||
|
||||
This document identifies missing documentation for completed features based on the `done.md` file and current documentation state.
|
||||
|
||||
## Critical Missing Documentation
|
||||
|
||||
### 1. Zero-Knowledge Proof Receipt Attestation
|
||||
**Status**: ✅ Completed (Implementation in Stage 7)
|
||||
**Missing Documentation**:
|
||||
- [ ] User guide: How to use ZK proofs for receipt attestation
|
||||
- [ ] Developer guide: Integrating ZK proofs into applications
|
||||
- [ ] Operator guide: Setting up ZK proof generation service
|
||||
- [ ] API reference: ZK proof endpoints and parameters
|
||||
- [ ] Tutorial: End-to-end ZK proof workflow
|
||||
|
||||
**Priority**: High - Complex feature requiring user education
|
||||
|
||||
### 2. Confidential Transactions
|
||||
**Status**: ✅ Completed (Implementation in Stage 7)
|
||||
**Existing**: Technical implementation docs
|
||||
**Missing Documentation**:
|
||||
- [ ] User guide: How to create confidential transactions
|
||||
- [ ] Developer guide: Building privacy-preserving applications
|
||||
- [ ] Migration guide: Moving from regular to confidential transactions
|
||||
- [ ] Security considerations: Best practices for confidential transactions
|
||||
|
||||
**Priority**: High - Security-sensitive feature
|
||||
|
||||
### 3. HSM Key Management
|
||||
**Status**: ✅ Completed (Implementation in Stage 7)
|
||||
**Missing Documentation**:
|
||||
- [ ] Operator guide: HSM setup and configuration
|
||||
- [ ] Integration guide: Azure Key Vault integration
|
||||
- [ ] Integration guide: AWS KMS integration
|
||||
- [ ] Security guide: HSM best practices
|
||||
- [ ] Troubleshooting: Common HSM issues
|
||||
|
||||
**Priority**: High - Enterprise feature
|
||||
|
||||
### 4. Multi-tenant Coordinator Infrastructure
|
||||
**Status**: ✅ Completed (Implementation in Stage 7)
|
||||
**Missing Documentation**:
|
||||
- [ ] Architecture guide: Multi-tenant architecture overview
|
||||
- [ ] Operator guide: Setting up multi-tenant infrastructure
|
||||
- [ ] Tenant management: Creating and managing tenants
|
||||
- [ ] Billing guide: Understanding billing and quotas
|
||||
- [ ] Migration guide: Moving to multi-tenant setup
|
||||
|
||||
**Priority**: High - Major architectural change
|
||||
|
||||
### 5. Enterprise Connectors (Python SDK)
|
||||
**Status**: ✅ Completed (Implementation in Stage 7)
|
||||
**Existing**: Technical implementation
|
||||
**Missing Documentation**:
|
||||
- [ ] Quick start: Getting started with enterprise connectors
|
||||
- [ ] Connector guide: Stripe connector usage
|
||||
- [ ] Connector guide: ERP connector usage
|
||||
- [ ] Development guide: Building custom connectors
|
||||
- [ ] Reference: Complete API documentation
|
||||
|
||||
**Priority**: Medium - Developer-facing feature
|
||||
|
||||
### 6. Ecosystem Certification Program
|
||||
**Status**: ✅ Completed (Implementation in Stage 7)
|
||||
**Existing**: Program documentation
|
||||
**Missing Documentation**:
|
||||
- [ ] Participant guide: How to get certified
|
||||
- [ ] Self-service portal: Using the certification portal
|
||||
- [ ] Badge guide: Displaying certification badges
|
||||
- [ ] Maintenance guide: Maintaining certification status
|
||||
|
||||
**Priority**: Medium - Program adoption
|
||||
|
||||
## Moderate Priority Gaps
|
||||
|
||||
### 7. Cross-Chain Settlement
|
||||
**Status**: ✅ Completed (Implementation in Stage 6)
|
||||
**Existing**: Design documentation
|
||||
**Missing Documentation**:
|
||||
- [ ] Integration guide: Setting up cross-chain bridges
|
||||
- [ ] Tutorial: Cross-chain transaction walkthrough
|
||||
- [ ] Reference: Bridge API documentation
|
||||
|
||||
### 8. GPU Service Registry (30+ Services)
|
||||
**Status**: ✅ Completed (Implementation in Stage 7)
|
||||
**Missing Documentation**:
|
||||
- [ ] Provider guide: Registering GPU services
|
||||
- [ ] Service catalog: Available service types
|
||||
- [ ] Pricing guide: Setting service prices
|
||||
- [ ] Integration guide: Using GPU services
|
||||
|
||||
### 9. Advanced Cryptography Features
|
||||
**Status**: ✅ Completed (Implementation in Stage 7)
|
||||
**Missing Documentation**:
|
||||
- [ ] Hybrid encryption guide: Using AES-256-GCM + X25519
|
||||
- [ ] Role-based access control: Setting up RBAC
|
||||
- [ ] Audit logging: Configuring tamper-evident logging
|
||||
|
||||
## Low Priority Gaps
|
||||
|
||||
### 10. Community & Governance
|
||||
**Status**: ✅ Completed (Implementation in Stage 7)
|
||||
**Existing**: Framework documentation
|
||||
**Missing Documentation**:
|
||||
- [ ] Governance website: User guide for governance site
|
||||
- [ ] RFC templates: Detailed RFC writing guide
|
||||
- [ ] Community metrics: Understanding KPIs
|
||||
|
||||
### 11. Ecosystem Growth Initiatives
|
||||
**Status**: ✅ Completed (Implementation in Stage 7)
|
||||
**Existing**: Program documentation
|
||||
**Missing Documentation**:
|
||||
- [ ] Hackathon platform: Using the submission platform
|
||||
- [ ] Grant tracking: Monitoring grant progress
|
||||
- [ ] Extension marketplace: Publishing extensions
|
||||
|
||||
## Documentation Structure Improvements
|
||||
|
||||
### Missing Sections
|
||||
1. **Migration Guides** - No migration documentation for major changes
|
||||
2. **Troubleshooting** - Limited troubleshooting guides
|
||||
3. **Best Practices** - Few best practice documents
|
||||
4. **Performance Guides** - No performance optimization guides
|
||||
5. **Security Guides** - Limited security documentation beyond threat modeling
|
||||
|
||||
### Outdated Documentation
|
||||
1. **API References** - May not reflect latest endpoints
|
||||
2. **Installation Guides** - May not include all components
|
||||
3. **Configuration** - Missing new configuration options
|
||||
|
||||
## Recommended Actions
|
||||
|
||||
### Immediate (Next Sprint)
|
||||
1. Create ZK proof user guide and developer tutorial
|
||||
2. Document HSM integration for Azure Key Vault and AWS KMS
|
||||
3. Write multi-tenant setup guide for operators
|
||||
4. Create confidential transaction quick start
|
||||
|
||||
### Short Term (Next Month)
|
||||
1. Complete enterprise connector documentation
|
||||
2. Add cross-chain settlement integration guides
|
||||
3. Document GPU service provider workflow
|
||||
4. Create migration guides for major features
|
||||
|
||||
### Medium Term (Next Quarter)
|
||||
1. Expand troubleshooting section
|
||||
2. Add performance optimization guides
|
||||
3. Create security best practices documentation
|
||||
4. Build interactive tutorials for complex features
|
||||
|
||||
### Long Term (Next 6 Months)
|
||||
1. Create video tutorials for key workflows
|
||||
2. Build interactive API documentation
|
||||
3. Add regional deployment guides
|
||||
4. Create compliance documentation for regulated markets
|
||||
|
||||
## Documentation Metrics
|
||||
|
||||
### Current State
|
||||
- Total markdown files: 65+
|
||||
- Organized into: 5 main categories
|
||||
- Missing critical docs: 11 major features
|
||||
- Coverage estimate: 60% of completed features documented
|
||||
|
||||
### Target State
|
||||
- Critical features: 100% documented
|
||||
- User guides: All major features
|
||||
- Developer resources: Complete API coverage
|
||||
- Operator guides: All deployment scenarios
|
||||
|
||||
## Resources Needed
|
||||
|
||||
### Writers
|
||||
- Technical writer: 1 FTE for 3 months
|
||||
- Developer advocates: 2 FTE for tutorials
|
||||
- Security specialist: For security documentation
|
||||
|
||||
### Tools
|
||||
- Documentation platform: GitBook or Docusaurus
|
||||
- API documentation: Swagger/OpenAPI tools
|
||||
- Interactive tutorials: CodeSandbox or similar
|
||||
|
||||
### Process
|
||||
- Documentation review workflow
|
||||
- Translation process for internationalization
|
||||
- Community contribution process for docs
|
||||
|
||||
---
|
||||
|
||||
**Last Updated**: 2024-01-15
|
||||
**Next Review**: 2024-02-15
|
||||
**Owner**: Documentation Team

205
docs/reference/done.md
Normal file
@ -0,0 +1,205 @@

# Completed Bootstrap Tasks
|
||||
|
||||
## Repository Initialization
|
||||
|
||||
- Scaffolded core monorepo directories reflected in `docs/bootstrap/dirs.md`.
|
||||
- Added top-level config files: `.editorconfig`, `.gitignore`, `LICENSE`, and root `README.md`.
|
||||
- Created Windsurf workspace metadata under `windsurf/`.
|
||||
|
||||
## Documentation
|
||||
|
||||
- Authored `docs/roadmap.md` capturing staged development targets.
|
||||
- Added README placeholders for primary apps under `apps/` to outline purpose and setup notes.
|
||||
|
||||
## Coordinator API
|
||||
|
||||
- Implemented SQLModel-backed job persistence and service layer in `apps/coordinator-api/src/app/`.
|
||||
- Wired client, miner, and admin routers to coordinator services (job lifecycle, scheduling, stats).
|
||||
- Added initial pytest coverage under `apps/coordinator-api/tests/test_jobs.py`.
|
||||
- Added signed receipt generation, persistence (`Job.receipt`, `JobReceipt` history table), retrieval endpoints, telemetry metrics, and optional coordinator attestations.
|
||||
- Persisted historical receipts via `JobReceipt`; exposed `/v1/jobs/{job_id}/receipts` endpoint and integrated canonical serialization.
|
||||
- Documented receipt attestation configuration (`RECEIPT_ATTESTATION_KEY_HEX`) in `docs/run.md` and coordinator README.
|
||||
|
||||
## Miner Node
|
||||
|
||||
- Created coordinator client, control loop, and capability/backoff utilities in `apps/miner-node/src/aitbc_miner/`.
|
||||
- Implemented CLI/Python runners and execution pipeline with result reporting.
|
||||
- Added starter tests for runners in `apps/miner-node/tests/test_runners.py`.
|
||||
|
||||
## Blockchain Node
|
||||
|
||||
- Added websocket fan-out, disconnect cleanup, and load-test coverage in `apps/blockchain-node/tests/test_websocket.py`, ensuring gossip topics deliver reliably to multiple subscribers.
|
||||
|
||||
## Directory Preparation
|
||||
|
||||
- Established scaffolds for Python and JavaScript packages in `packages/py/` and `packages/js/`.
|
||||
- Seeded example project directories under `examples/` for quickstart clients and receipt verification.
|
||||
- Added `examples/receipts-sign-verify/fetch_and_verify.py` demonstrating coordinator receipt fetching + verification using Python SDK.
|
||||
|
||||
## Python SDK
|
||||
|
||||
- Created `packages/py/aitbc-sdk/` with coordinator receipt client and verification helpers consuming `aitbc_crypto` utilities.
|
||||
- Added pytest coverage under `packages/py/aitbc-sdk/tests/test_receipts.py` validating miner/coordinator signature checks and client behavior.
|
||||
|
||||
## Wallet Daemon
|
||||
|
||||
- Added `apps/wallet-daemon/src/app/receipts/service.py` providing `ReceiptVerifierService` that fetches and validates receipts via `aitbc_sdk`.
|
||||
- Created unit tests under `apps/wallet-daemon/tests/test_receipts.py` verifying service behavior.
|
||||
- Implemented wallet SDK receipt ingestion + attestation surfacing in `packages/py/aitbc-sdk/src/receipts.py`, including pagination client, signature verification, and failure diagnostics with full pytest coverage.
|
||||
- Hardened REST API by wiring dependency overrides in `apps/wallet-daemon/tests/test_wallet_api.py`, expanding workflow coverage (create/list/unlock/sign) and enforcing structured password policy errors consumed in CI.
|
||||
|
||||
## Explorer Web
|
||||
|
||||
- Initialized a Vite + TypeScript scaffold in `apps/explorer-web/` with `vite.config.ts`, `tsconfig.json`, and placeholder `src/main.ts` content.
|
||||
- Installed frontend dependencies locally to unblock editor tooling and TypeScript type resolution.
|
||||
- Implemented `overview` page stats rendering backed by mock block/transaction/receipt fetchers, including robust empty-state handling and TypeScript type fixes.
|
||||
|
||||
## Pool Hub
|
||||
|
||||
- Implemented FastAPI service scaffolding with Redis/PostgreSQL-backed repositories, match/health/metrics endpoints, and Prometheus instrumentation (`apps/pool-hub/src/poolhub/`).
|
||||
- Added Alembic migrations (`apps/pool-hub/migrations/`) and async integration tests covering repositories and endpoints (`apps/pool-hub/tests/`).
|
||||
|
||||
## Solidity Token
|
||||
|
||||
- Implemented attested minting logic in `packages/solidity/aitbc-token/contracts/AIToken.sol` using `AccessControl` role gates and ECDSA signature recovery.
|
||||
- Added Hardhat unit tests in `packages/solidity/aitbc-token/test/aitoken.test.ts` covering successful minting, replay prevention, and invalid attestor signatures.
|
||||
- Configured project TypeScript settings via `packages/solidity/aitbc-token/tsconfig.json` to align Hardhat, Node, and Mocha typings for the contract test suite.
|
||||
|
||||
## JavaScript SDK
|
||||
|
||||
- Delivered fetch-based client wrapper with TypeScript definitions and Vitest coverage under `packages/js/aitbc-sdk/`.
|
||||
|
||||
## Blockchain Node Enhancements
|
||||
|
||||
- Added comprehensive WebSocket tests for blocks and transactions streams including multi-subscriber and high-volume scenarios.
|
||||
- Extended PoA consensus with per-proposer block metrics and rotation tracking.
|
||||
- Added latest block interval gauge and RPC error spike alerting.
|
||||
- Enhanced observability with Grafana dashboards for blockchain node and coordinator overview.
|
||||
- Implemented marketplace endpoints in coordinator API with explorer and marketplace routers.
|
||||
- Added mock coordinator integration with enhanced telemetry capabilities.
|
||||
- Created comprehensive observability documentation and alerting rules.
|
||||
|
||||
## Explorer Web Production Readiness
|
||||
|
||||
- Implemented Playwright end-to-end tests for live mode functionality.
|
||||
- Enhanced responsive design with improved CSS layout system.
|
||||
- Added comprehensive error handling and fallback mechanisms for live API responses.
|
||||
- Integrated live coordinator endpoints with proper data reconciliation.
|
||||
|
||||
## Marketplace Web Launch
|
||||
|
||||
- Completed auth/session scaffolding for marketplace actions.
|
||||
- Implemented API abstraction layer with mock/live mode toggle.
|
||||
- Connected mock listings and bids to coordinator data sources.
|
||||
- Added feature flags for controlled live mode rollout.
|
||||
|
||||
## Cross-Chain Settlement
|
||||
|
||||
- Implemented cross-chain settlement hooks with external bridges.
|
||||
- Created BridgeAdapter interface for LayerZero integration.
|
||||
- Implemented BridgeManager for orchestration and retry logic.
|
||||
- Added settlement storage and API endpoints.
|
||||
- Created cross-chain settlement documentation.
|
||||
|
||||
## Python SDK Transport Abstraction
|
||||
|
||||
- Designed pluggable transport abstraction layer for multi-network support.
|
||||
- Implemented base Transport interface with HTTP/WebSocket transports.
|
||||
- Created MultiNetworkClient for managing multiple blockchain networks.
|
||||
- Updated AITBCClient to use transport abstraction with backward compatibility.
|
||||
- Added transport documentation and examples.
|
||||
|
||||
## GPU Service Provider Configuration
|
||||
|
||||
- Extended Miner model to include service configurations.
|
||||
- Created service configuration API endpoints in pool-hub.
|
||||
- Built HTML/JS UI for service provider configuration.
|
||||
- Added service pricing configuration and capability validation.
|
||||
- Implemented service selection for GPU providers.
|
||||
|
||||
## GPU Service Expansion
|
||||
|
||||
- Implemented dynamic service registry framework for 30+ GPU services.
|
||||
- Created service definitions for 6 categories: AI/ML, Media Processing, Scientific Computing, Data Analytics, Gaming, Development Tools.
|
||||
- Built comprehensive service registry API with validation and discovery.
|
||||
- Added hardware requirement checking and pricing models.
|
||||
- Updated roadmap with service expansion phase documentation.
|
||||
|
||||
## Stage 7 - GPU Service Expansion & Privacy Features
|
||||
|
||||
### GPU Service Infrastructure
|
||||
- Create dynamic service registry with JSON schema validation
|
||||
- Implement service provider configuration UI with dynamic service selection
|
||||
- Create service definitions for AI/ML (LLM inference, image/video generation, speech recognition, computer vision, recommendation systems)
|
||||
- Create service definitions for Media Processing (video transcoding, streaming, 3D rendering, image/audio processing)
|
||||
- Create service definitions for Scientific Computing (molecular dynamics, weather modeling, financial modeling, physics simulation, bioinformatics)
|
||||
- Create service definitions for Data Analytics (big data processing, real-time analytics, graph analytics, time series analysis)
|
||||
- Create service definitions for Gaming & Entertainment (cloud gaming, asset baking, physics simulation, VR/AR rendering)
|
||||
- Create service definitions for Development Tools (GPU compilation, model training, data processing, simulation testing, code generation)
|
||||
- Implement service-specific validation and hardware requirement checking
|
||||
|
||||
### Privacy & Cryptography Features
|
||||
- ✅ Research zk-proof-based receipt attestation and prototype a privacy-preserving settlement flow
|
||||
- ✅ Implement Groth16 ZK circuit for receipt hash preimage proofs
|
||||
- ✅ Create ZK proof generation service in coordinator API
|
||||
- ✅ Implement on-chain verification contract (ZKReceiptVerifier.sol)
|
||||
- ✅ Add confidential transaction support with opt-in ciphertext storage
|
||||
- ✅ Implement HSM-backed key management (Azure Key Vault, AWS KMS, Software)
|
||||
- ✅ Create hybrid encryption system (AES-256-GCM + X25519)
|
||||
- ✅ Implement role-based access control with time restrictions
|
||||
- ✅ Create tamper-evident audit logging with chain of hashes
|
||||
- ✅ Publish comprehensive threat modeling with STRIDE analysis
|
||||
- ✅ Update cross-chain settlement hooks for ZK proofs and privacy levels
|
||||
|
||||
### Enterprise Integration Features
|
||||
- ✅ Deliver reference connectors for ERP/payment systems with Python SDK
|
||||
- ✅ Implement Stripe payment connector with full charge/refund/subscription support
|
||||
- ✅ Create enterprise-grade Python SDK with async support, dependency injection, metrics
|
||||
- ✅ Build ERP connector base classes with plugin architecture for protocols
|
||||
- ✅ Document comprehensive SLAs with uptime guarantees and support commitments
|
||||
- ✅ Stand up multi-tenant coordinator infrastructure with per-tenant isolation
|
||||
- ✅ Implement tenant management service with lifecycle operations
|
||||
- ✅ Create tenant context middleware for automatic tenant identification
|
||||
- ✅ Build resource quota enforcement with Redis-backed caching
|
||||
- ✅ Create usage tracking and billing metrics with tiered pricing
|
||||
- ✅ Launch ecosystem certification program with SDK conformance testing
|
||||
- ✅ Define Bronze/Silver/Gold certification tiers with clear requirements
|
||||
- ✅ Build language-agnostic test suite with OpenAPI contract validation
|
||||
- ✅ Implement security validation framework with dependency scanning
|
||||
- ✅ Design public registry API for partner/SDK discovery
|
||||
- ✅ Validate certification system with Stripe connector certification
|
||||
|
||||
### Community & Governance Features
|
||||
- ✅ Establish open RFC process with clear stages and review criteria
|
||||
- ✅ Create governance website with documentation and navigation
|
||||
- ✅ Set up community call schedule with multiple call types
|
||||
- ✅ Design RFC template and GitHub PR template for submissions
|
||||
- ✅ Implement benevolent dictator model with sunset clause
|
||||
- ✅ Create hybrid governance structure (GitHub + Discord + Website)
|
||||
- ✅ Document participation guidelines and code of conduct
|
||||
- ✅ Establish transparency and accountability processes
|
||||
|
||||
### Ecosystem Growth Initiatives
|
||||
- ✅ Create hackathon organization framework with quarterly themes and bounty board
|
||||
- ✅ Design grant program with hybrid approach (micro-grants + strategic grants)
|
||||
- ✅ Build marketplace extension SDK with cookiecutter templates
|
||||
- ✅ Create analytics tooling for ecosystem metrics and KPI tracking
|
||||
- ✅ Track ecosystem KPIs (active marketplaces, cross-chain volume) and feed them into quarterly strategy reviews
|
||||
- ✅ Establish judging criteria with ecosystem impact weighting
|
||||
- ✅ Create sponsor partnership framework with tiered benefits
|
||||
- ✅ Design retroactive grants for proven projects
|
||||
- ✅ Implement milestone-based disbursement for accountability
|
||||
|
||||
### Stage 8 - Frontier R&D & Global Expansion
|
||||
- ✅ Launch research consortium framework with governance model and membership tiers
|
||||
- ✅ Develop hybrid PoA/PoS consensus research plan with 12-month implementation timeline
|
||||
- ✅ Create scaling research plan for sharding and rollups (100K+ TPS target)
|
||||
- ✅ Design ZK applications research plan for privacy-preserving AI
|
||||
- ✅ Create governance research plan with liquid democracy and AI assistance
|
||||
- ✅ Develop economic models research plan with sustainable tokenomics
|
||||
- ✅ Implement hybrid consensus prototype demonstrating dynamic mode switching
|
||||
- ✅ Create executive summary for consortium recruitment
|
||||
- ✅ Prototype sharding architecture with beacon chain coordination
|
||||
- ✅ Implement ZK-rollup prototype for transaction batching
|
||||
- ⏳ Set up consortium legal structure and operational infrastructure
|
||||
- ⏳ Recruit founding members from industry and academia

230
docs/reference/enterprise-sla.md
Normal file
@ -0,0 +1,230 @@

# AITBC Enterprise Integration SLA
|
||||
|
||||
## Overview
|
||||
|
||||
This document outlines the Service Level Agreement (SLA) for enterprise integrations with the AITBC network, including uptime guarantees, performance expectations, and support commitments.
|
||||
|
||||
## Document Version
|
||||
- Version: 1.0
|
||||
- Date: December 2024
|
||||
- Effective Date: January 1, 2025
|
||||
|
||||
## Service Availability
|
||||
|
||||
### Coordinator API
|
||||
- **Uptime Guarantee**: 99.9% monthly (excluding scheduled maintenance)
|
||||
- **Scheduled Maintenance**: Maximum 4 hours per month, announced 72 hours in advance
|
||||
- **Emergency Maintenance**: Maximum 2 hours per month, announced 2 hours in advance
|
||||
|
||||
### Mining Pool Network
|
||||
- **Network Uptime**: 99.5% monthly
|
||||
- **Minimum Active Miners**: 1000 miners globally distributed
|
||||
- **Geographic Distribution**: Minimum 3 continents, 5 countries
|
||||
|
||||
### Settlement Layer
|
||||
- **Confirmation Time**: 95% of transactions confirmed within 30 seconds
|
||||
- **Cross-Chain Bridge**: 99% availability for supported chains
|
||||
- **Finality**: 99.9% of transactions final after 2 confirmations
|
||||
|
||||
## Performance Metrics
|
||||
|
||||
### API Response Times
|
||||
| Endpoint | 50th Percentile | 95th Percentile | 99th Percentile |
|
||||
|----------|-----------------|-----------------|-----------------|
|
||||
| Job Submission | 50ms | 100ms | 200ms |
|
||||
| Job Status | 25ms | 50ms | 100ms |
|
||||
| Receipt Verification | 100ms | 200ms | 500ms |
|
||||
| Settlement Initiation | 150ms | 300ms | 1000ms |
|
||||
|
||||
### Throughput Limits
|
||||
| Service | Rate Limit | Burst Limit |
|
||||
|---------|------------|------------|
|
||||
| Job Submission | 1000/minute | 100/minute |
|
||||
| API Calls | 10,000/minute | 1000/minute |
|
||||
| Webhook Events | 5000/minute | 500/minute |
|
||||
|
||||
### Data Processing
|
||||
- **Proof Generation**: Average 2 seconds, 95% under 5 seconds
|
||||
- **ZK Verification**: Average 100ms, 95% under 200ms
|
||||
- **Encryption/Decryption**: Average 50ms, 95% under 100ms
|
||||
|
||||
## Support Services
|
||||
|
||||
### Support Tiers
|
||||
| Tier | Response Time | Availability | Escalation |
|
||||
|------|---------------|--------------|------------|
|
||||
| Enterprise | 1 hour (P1), 4 hours (P2), 24 hours (P3) | 24x7x365 | Direct to engineering |
|
||||
| Business | 4 hours (P1), 24 hours (P2), 48 hours (P3) | Business hours | Technical lead |
|
||||
| Developer | 24 hours (P1), 72 hours (P2), 5 days (P3) | Business hours | Support team |
|
||||
|
||||
### Incident Management
|
||||
- **P1 - Critical**: System down, data loss, security breach
|
||||
- **P2 - High**: Significant feature degradation, performance impact
|
||||
- **P3 - Medium**: Feature not working, documentation issues
|
||||
- **P4 - Low**: General questions, enhancement requests
|
||||
|
||||
### Maintenance Windows
|
||||
- **Regular Maintenance**: Every Sunday 02:00-04:00 UTC
|
||||
- **Security Updates**: As needed, minimum 24 hours notice
|
||||
- **Major Upgrades**: Quarterly, minimum 30 days notice
|
||||
|
||||
## Data Management
|
||||
|
||||
### Data Retention
|
||||
| Data Type | Retention Period | Archival |
|
||||
|-----------|------------------|----------|
|
||||
| Transaction Records | 7 years | Yes |
|
||||
| Audit Logs | 7 years | Yes |
|
||||
| Performance Metrics | 2 years | Yes |
|
||||
| Error Logs | 90 days | No |
|
||||
| Debug Logs | 30 days | No |
|
||||
|
||||
### Data Availability
|
||||
- **Backup Frequency**: Every 15 minutes
|
||||
- **Recovery Point Objective (RPO)**: 15 minutes
|
||||
- **Recovery Time Objective (RTO)**: 4 hours
|
||||
- **Geographic Redundancy**: 3 regions, cross-replicated
|
||||
|
||||
### Privacy and Compliance
|
||||
- **GDPR Compliant**: Yes
|
||||
- **Data Processing Agreement**: Available
|
||||
- **Privacy Impact Assessment**: Completed
|
||||
- **Certifications**: ISO 27001, SOC 2 Type II
|
||||
|
||||
## Integration SLAs
|
||||
|
||||
### ERP Connectors
|
||||
| Metric | Target |
|
||||
|--------|--------|
|
||||
| Sync Latency | < 5 minutes |
|
||||
| Data Accuracy | 99.99% |
|
||||
| Error Rate | < 0.1% |
|
||||
| Retry Success Rate | > 99% |
|
||||
|
||||
### Payment Processors
|
||||
| Metric | Target |
|
||||
|--------|--------|
|
||||
| Settlement Time | < 2 minutes |
|
||||
| Success Rate | 99.9% |
|
||||
| Fraud Detection | < 0.01% false positive |
|
||||
| Chargeback Handling | 24 hours |
|
||||
|
||||
### Webhook Delivery
|
||||
- **Delivery Guarantee**: 99.5% successful delivery
|
||||
- **Retry Policy**: Exponential backoff, max 10 attempts
|
||||
- **Timeout**: 30 seconds per attempt
|
||||
- **Verification**: HMAC-SHA256 signatures
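
For integrators, signature verification reduces to a constant-time HMAC comparison. A minimal sketch, assuming the raw request body and a shared secret; the header name and secret distribution are not specified by this SLA:

```python
# Minimal sketch of webhook signature verification; header naming and secret
# management are assumptions, not part of the published SLA.
import hashlib
import hmac


def verify_webhook(payload: bytes, signature_header: str, secret: bytes) -> bool:
    """Return True if the HMAC-SHA256 signature matches the payload."""
    expected = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_header)
```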
|
||||
|
||||
## Security Commitments
|
||||
|
||||
### Availability
|
||||
- **DDoS Protection**: 99.9% mitigation success
|
||||
- **Incident Response**: < 1 hour detection, < 4 hours containment
|
||||
- **Vulnerability Patching**: Critical patches within 24 hours
|
||||
|
||||
### Encryption Standards
|
||||
- **In Transit**: TLS 1.3 minimum
|
||||
- **At Rest**: AES-256 encryption
|
||||
- **Key Management**: HSM-backed, regular rotation
|
||||
- **Compliance**: FIPS 140-2 Level 3
|
||||
|
||||
## Penalties and Credits
|
||||
|
||||
### Service Credits
|
||||
| Downtime | Credit Percentage |
|
||||
|----------|------------------|
|
||||
| < 99.9% uptime | 10% |
|
||||
| < 99.5% uptime | 25% |
|
||||
| < 99.0% uptime | 50% |
|
||||
| < 98.0% uptime | 100% |
|
||||
|
||||
### Performance Credits
|
||||
| Metric Miss | Credit |
|
||||
|-------------|--------|
|
||||
| Response time > 95th percentile | 5% |
|
||||
| Throughput limit exceeded | 10% |
|
||||
| Data loss > RPO | 100% |
|
||||
|
||||
### Claim Process
|
||||
1. Submit ticket within 30 days of incident
|
||||
2. Provide evidence of SLA breach
|
||||
3. Review within 5 business days
|
||||
4. Credit applied to next invoice
|
||||
|
||||
## Exclusions
|
||||
|
||||
### Force Majeure
|
||||
- Natural disasters
|
||||
- War, terrorism, civil unrest
|
||||
- Government actions
|
||||
- Internet outages beyond control
|
||||
|
||||
### Customer Responsibilities
|
||||
- Proper API implementation
|
||||
- Adequate error handling
|
||||
- Rate limit compliance
|
||||
- Security best practices
|
||||
|
||||
### Third-Party Dependencies
|
||||
- External payment processors
|
||||
- Cloud provider outages
|
||||
- Blockchain network congestion
|
||||
- DNS issues
|
||||
|
||||
## Monitoring and Reporting
|
||||
|
||||
### Available Metrics
|
||||
- Real-time dashboard
|
||||
- Historical reports (24 months)
|
||||
- API usage analytics
|
||||
- Performance benchmarks
|
||||
|
||||
### Custom Reports
|
||||
- Monthly SLA reports
|
||||
- Quarterly business reviews
|
||||
- Annual security assessments
|
||||
- Custom KPI tracking
|
||||
|
||||
### Alerting
|
||||
- Email notifications
|
||||
- SMS for critical issues
|
||||
- Webhook callbacks
|
||||
- Slack integration
|
||||
|
||||
## Contact Information
|
||||
|
||||
### Support
|
||||
- **Enterprise Support**: enterprise@aitbc.io
|
||||
- **Technical Support**: support@aitbc.io
|
||||
- **Security Issues**: security@aitbc.io
|
||||
- **Emergency Hotline**: +1-555-SECURITY
|
||||
|
||||
### Account Management
|
||||
- **Enterprise Customers**: account@aitbc.io
|
||||
- **Partners**: partners@aitbc.io
|
||||
- **Billing**: billing@aitbc.io
|
||||
|
||||
## Definitions
|
||||
|
||||
### Terms
|
||||
- **Uptime**: Percentage of time services are available and functional
|
||||
- **Response Time**: Time from request receipt to first byte of response
|
||||
- **Throughput**: Number of requests processed per time unit
|
||||
- **Error Rate**: Percentage of requests resulting in errors
|
||||
|
||||
### Calculations
|
||||
- Monthly uptime is calculated as (total minutes - downtime) / total minutes; at 99.9%, a 30-day month allows roughly 43 minutes of downtime
|
||||
- Percentiles measured over trailing 30-day period
|
||||
- Credits calculated on monthly service fees
|
||||
|
||||
## Amendments
|
||||
|
||||
This SLA may be amended with:
|
||||
- 30 days written notice for non-material changes
|
||||
- 90 days written notice for material changes
|
||||
- Mutual agreement for custom terms
|
||||
- Immediate notice for security updates
|
||||
|
||||
---
|
||||
|
||||
*This SLA is part of the Enterprise Integration Agreement and is subject to the terms and conditions therein.*

45
docs/reference/index.md
Normal file
@ -0,0 +1,45 @@

# AITBC Reference Documentation
|
||||
|
||||
Welcome to the AITBC reference documentation. This section contains technical specifications, architecture details, and historical documentation.
|
||||
|
||||
## Architecture & Design
|
||||
|
||||
- [Architecture Overview](architecture/) - System architecture documentation
|
||||
- [Cross-Chain Settlement](architecture/cross-chain-settlement-design.md) - Cross-chain settlement design
|
||||
- [Python SDK Transport](architecture/python-sdk-transport-design.md) - Transport abstraction design
|
||||
|
||||
## Bootstrap Specifications
|
||||
|
||||
- [Bootstrap Directory](bootstrap/dirs.md) - Original directory structure
|
||||
- [Technical Plan](bootstrap/aitbc_tech_plan.md) - Original technical specification
|
||||
- [Component Specs](bootstrap/) - Individual component specifications
|
||||
|
||||
## Cryptography & Privacy
|
||||
|
||||
- [ZK Receipt Attestation](zk-receipt-attestation.md) - Zero-knowledge proof implementation
|
||||
- [ZK Implementation Summary](zk-implementation-summary.md) - ZK implementation overview
|
||||
- [ZK Technology Comparison](zk-technology-comparison.md) - ZK technology comparison
|
||||
- [Confidential Transactions](confidential-transactions.md) - Confidential transaction implementation
|
||||
- [Confidential Implementation Summary](confidential-implementation-summary.md) - Implementation summary
|
||||
- [Threat Modeling](threat-modeling.md) - Security threat modeling
|
||||
|
||||
## Enterprise Features
|
||||
|
||||
- [Enterprise SLA](enterprise-sla.md) - Service level agreements
|
||||
- [Multi-tenancy](multi-tenancy.md) - Multi-tenant infrastructure
|
||||
- [HSM Integration](hsm-integration.md) - Hardware security module integration
|
||||
|
||||
## Project Documentation
|
||||
|
||||
- [Roadmap](roadmap.md) - Development roadmap
|
||||
- [Completed Tasks](done.md) - List of completed features
|
||||
- [Beta Release Plan](beta-release-plan.md) - Beta release planning
|
||||
|
||||
## Historical
|
||||
|
||||
- [Component Documentation](../coordinator_api.md) - Historical component docs
|
||||
- [Bootstrap Archive](bootstrap/) - Original bootstrap documentation
|
||||
|
||||
## Glossary
|
||||
|
||||
- [Terms](glossary.md) - AITBC terminology and definitions

236
docs/reference/roadmap.md
Normal file
@ -0,0 +1,236 @@

# AITBC Development Roadmap
|
||||
|
||||
This roadmap aggregates high-priority tasks derived from the bootstrap specifications in `docs/bootstrap/` and tracks progress across the monorepo. Update this document as milestones evolve.
|
||||
|
||||
## Stage 1 — Upcoming Focus Areas
|
||||
|
||||
- **Blockchain Node Foundations**
|
||||
- ✅ Bootstrap module layout in `apps/blockchain-node/src/`.
|
||||
- ✅ Implement SQLModel schemas and RPC stubs aligned with historical/attested receipts.
|
||||
|
||||
- **Explorer Web Enablement**
|
||||
- ✅ Finish mock integration across all pages and polish styling + mock/live toggle.
|
||||
- ✅ Begin wiring coordinator endpoints (e.g., `/v1/jobs/{job_id}/receipts`).
|
||||
|
||||
- **Marketplace Web Scaffolding**
|
||||
- ✅ Scaffold Vite/vanilla frontends consuming coordinator receipt history endpoints and SDK examples.
|
||||
|
||||
- **Pool Hub Services**
|
||||
- ✅ Initialize FastAPI project, scoring registry, and telemetry ingestion hooks leveraging coordinator/miner metrics.
|
||||
|
||||
- **CI Enhancements**
|
||||
- ✅ Add blockchain-node tests once available and frontend build/lint checks to `.github/workflows/python-tests.yml` or follow-on workflows.
|
||||
- ✅ Provide systemd unit + installer scripts under `scripts/` for streamlined deployment.
|
||||
|
||||
## Stage 2 — Core Services (MVP)
|
||||
|
||||
- **Coordinator API**
|
||||
- ✅ Scaffold FastAPI project (`apps/coordinator-api/src/app/`).
|
||||
- ✅ Implement job submission, status, result endpoints.
|
||||
- ✅ Add miner registration, heartbeat, poll, result routes.
|
||||
- ✅ Wire SQLite persistence for jobs, miners, receipts (historical `JobReceipt` table).
|
||||
- ✅ Provide `.env.example`, `pyproject.toml`, and run scripts.
|
||||
|
||||
- **Miner Node**
|
||||
- ✅ Implement capability probe and control loop (register → heartbeat → fetch jobs).
|
||||
- ✅ Build CLI and Python runners with sandboxed work dirs (result reporting stubbed to coordinator).
|
||||
|
||||
- **Blockchain Node**
|
||||
- ✅ Define SQLModel schema for blocks, transactions, accounts, receipts (`apps/blockchain-node/src/aitbc_chain/models.py`).
|
||||
- ✅ Harden schema parity across runtime + storage:
|
||||
- Alembic baseline + follow-on migrations in `apps/blockchain-node/migrations/` now track the SQLModel schema (blocks, transactions, receipts, accounts).
|
||||
- Added `Relationship` + `ForeignKey` wiring in `apps/blockchain-node/src/aitbc_chain/models.py` for block ↔ transaction ↔ receipt joins.
|
||||
- Introduced hex/enum validation hooks via Pydantic validators to ensure hash integrity and safe persistence.
|
||||
- ✅ Implement PoA proposer loop with block assembly (`apps/blockchain-node/src/aitbc_chain/consensus/poa.py`).
|
||||
- ✅ Expose REST RPC endpoints for tx submission, balances, receipts (`apps/blockchain-node/src/aitbc_chain/rpc/router.py`).
|
||||
- ✅ Deliver WebSocket RPC + P2P gossip layer:
|
||||
- ✅ Stand up WebSocket subscription endpoints (`apps/blockchain-node/src/aitbc_chain/rpc/websocket.py`) mirroring REST payloads.
|
||||
- ✅ Implement pub/sub transport for block + transaction gossip backed by an in-memory broker (Starlette `Broadcast` or Redis) with configurable fan-out.
|
||||
- ✅ Add integration tests and load-test harness ensuring gossip convergence and back-pressure handling.
|
||||
- ✅ Ship devnet scripts (`apps/blockchain-node/scripts/`).
|
||||
- ✅ Add observability hooks (JSON logging, Prometheus metrics) and integrate coordinator mock into devnet tooling.
|
||||
- ✅ Expand observability dashboards + miner mock integration:
|
||||
- Build Grafana dashboards for consensus health (block intervals, proposer rotation) and RPC latency (`apps/blockchain-node/observability/`).
|
||||
- Expose miner mock telemetry (job throughput, error rates) via shared Prometheus registry and ingest into blockchain-node dashboards.
|
||||
- Add alerting rules (Prometheus `Alertmanager`) for stalled proposers, queue saturation, and miner mock disconnects.
|
||||
- Wire coordinator mock into devnet tooling to simulate real-world load and validate observability hooks.
|
||||
|
||||
- **Receipt Schema**
|
||||
- ✅ Finalize canonical JSON receipt format under `protocols/receipts/` (includes sample signed receipts).
|
||||
- ✅ Implement signing/verification helpers in `packages/py/aitbc-crypto` (JS SDK pending).
|
||||
- ✅ Translate `docs/bootstrap/aitbc_tech_plan.md` contract skeleton into Solidity project (`packages/solidity/aitbc-token/`).
|
||||
- ✅ Add deployment/test scripts and document minting flow (`packages/solidity/aitbc-token/scripts/` and `docs/run.md`).
|
||||
|
||||
- **Wallet Daemon**
|
||||
- ✅ Implement encrypted keystore (Argon2id + XChaCha20-Poly1305) via `KeystoreService`.
|
||||
- ✅ Provide REST and JSON-RPC endpoints for wallet management and signing (`api_rest.py`, `api_jsonrpc.py`).
|
||||
- ✅ Add mock ledger adapter with SQLite backend powering event history (`ledger_mock/`).
|
||||
- ✅ Integrate Python receipt verification helpers (`aitbc_sdk`) and expose API/service utilities validating miner + coordinator signatures.
|
||||
- ✅ Harden REST API workflows (create/list/unlock/sign) with structured password policy enforcement and deterministic pytest coverage in `apps/wallet-daemon/tests/test_wallet_api.py`.
|
||||
- ✅ Implement Wallet SDK receipt ingestion + attestation surfacing:
|
||||
- Added `/v1/jobs/{job_id}/receipts` client helpers with cursor pagination, retry/backoff, and summary reporting (`packages/py/aitbc-sdk/src/receipts.py`).
|
||||
- Reused crypto helpers to validate miner and coordinator signatures, capturing per-key failure reasons for downstream UX.
|
||||
- Surfaced aggregated attestation status (`ReceiptStatus`) and failure diagnostics for SDK + UI consumers; JS helper parity still planned.
|
||||
|
||||
## Stage 3 — Pool Hub & Marketplace
|
||||
|
||||
- **Pool Hub**
|
||||
- ✅ Implement miner registry, scoring engine, and `/v1/match` API with Redis/PostgreSQL backing stores.
|
||||
- ✅ Add observability endpoints (`/v1/health`, `/v1/metrics`) plus Prometheus instrumentation and integration tests.
|
||||
|
||||
- **Marketplace Web**
|
||||
- ✅ Initialize Vite project with vanilla TypeScript (`apps/marketplace-web/`).
|
||||
- ✅ Build offer list, bid form, and stats cards powered by mock data fixtures (`public/mock/`).
|
||||
- ✅ Provide API abstraction toggling mock/live mode (`src/lib/api.ts`) and wire coordinator endpoints.
|
||||
- ✅ Validate live mode against coordinator `/v1/marketplace/*` responses and add auth feature flags for rollout.
|
||||
|
||||
- **Explorer Web**
|
||||
- ✅ Initialize Vite + TypeScript project scaffold (`apps/explorer-web/`).
|
||||
- ✅ Add routed pages for overview, blocks, transactions, addresses, receipts.
|
||||
- ✅ Seed mock datasets (`public/mock/`) and fetch helpers powering overview + blocks tables.
|
||||
- ✅ Extend mock integrations to transactions, addresses, and receipts pages.
|
||||
- ✅ Implement styling system, mock/live data toggle, and coordinator API wiring scaffold.
|
||||
- ✅ Render overview stats from mock block/transaction/receipt summaries with graceful empty-state fallbacks.
|
||||
- ✅ Validate live mode + responsive polish:
|
||||
- Hit live coordinator endpoints (`/v1/blocks`, `/v1/transactions`, `/v1/addresses`, `/v1/receipts`) via `getDataMode() === "live"` and reconcile payloads with UI models.
|
||||
- Add fallbacks + error surfacing for partial/failed live responses (toast + console diagnostics).
|
||||
- Audit responsive breakpoints (`public/css/layout.css`) and adjust grid/typography for tablet + mobile; add regression checks in Percy/Playwright snapshots.
|
||||
|
||||
## Stage 4 — Observability & Production Polish
|
||||
|
||||
- **Observability & Telemetry**
|
||||
- ✅ Build Grafana dashboards for PoA consensus health (block intervals, proposer rotation cadence) leveraging `poa_last_block_interval_seconds`, `poa_proposer_rotations_total`, and per-proposer counters.
|
||||
- ✅ Surface RPC latency histograms/summaries for critical endpoints (`rpc_get_head`, `rpc_send_tx`, `rpc_submit_receipt`) and add Grafana panels with SLO thresholds.
|
||||
- ✅ Ingest miner mock telemetry (job throughput, failure rate) into the shared Prometheus registry and wire panels/alerts that correlate miner health with consensus metrics.
|
||||
|
||||
- **Explorer Web (Live Mode)**
|
||||
- ✅ Finalize live `getDataMode() === "live"` workflow: align API payload contracts, render loading/error states, and persist mock/live toggle preference.
|
||||
- ✅ Expand responsive testing (tablet/mobile) and add automated visual regression snapshots prior to launch.
|
||||
- ✅ Integrate Playwright smoke tests covering overview, blocks, and transactions pages in live mode.
|
||||
|
||||
- **Marketplace Web (Launch Readiness)**
|
||||
- ✅ Connect mock listings/bids to coordinator data sources and provide feature flags for live mode rollout.
|
||||
- ✅ Implement auth/session scaffolding for marketplace actions and document API assumptions in `apps/marketplace-web/README.md`.
|
||||
- ✅ Add Grafana panels monitoring marketplace API throughput and error rates once endpoints are live.
|
||||
|
||||
- **Operational Hardening**
|
||||
- ✅ Extend Alertmanager rules to cover RPC error spikes, proposer stalls, and miner disconnects using the new metrics.
|
||||
- ✅ Document dashboard import + alert deployment steps in `docs/run.md` for operators.
|
||||
- ✅ Prepare Stage 3 release checklist linking dashboards, alerts, and smoke tests prior to production cutover.
|
||||
|
||||
## Stage 5 — Scaling & Release Readiness
|
||||
|
||||
- **Infrastructure Scaling**
|
||||
- ✅ Benchmark blockchain node throughput under sustained load; capture CPU/memory targets and suggest horizontal scaling thresholds.
|
||||
- ✅ Build Terraform/Helm templates for dev/staging/prod environments, including Prometheus/Grafana bundles.
|
||||
- ✅ Implement autoscaling policies for coordinator, miners, and marketplace services with synthetic traffic tests.
|
||||
|
||||
- **Reliability & Compliance**
|
||||
- ✅ Formalize backup/restore procedures for PostgreSQL, Redis, and ledger storage with scheduled jobs.
|
||||
- ✅ Complete security hardening review (TLS termination, API auth, secrets management) and document mitigations in `docs/security.md`.
|
||||
- ✅ Add chaos testing scripts (network partition, coordinator outage) and track mean-time-to-recovery metrics.
|
||||
|
||||
- **Product Launch Checklist**
|
||||
- ✅ Finalize public documentation (API references, onboarding guides) and publish to the docs portal.
|
||||
- ✅ Coordinate beta release timeline, including user acceptance testing of explorer/marketplace live modes.
|
||||
- ✅ Establish post-launch monitoring playbooks and on-call rotations.
|
||||
|
||||
## Stage 6 — Ecosystem Expansion
|
||||
|
||||
- **Cross-Chain & Interop**
|
||||
- ✅ Prototype cross-chain settlement hooks leveraging external bridges; document integration patterns.
|
||||
- ✅ Extend SDKs (Python/JS) with pluggable transport abstractions for multi-network support.
|
||||
- ⏳ Evaluate third-party explorer/analytics integrations and publish partner onboarding guides.
|
||||
|
||||
- **Marketplace Growth**
|
||||
- ⏳ Launch incentive programs (staking, liquidity mining) and expose telemetry dashboards tracking campaign performance.
|
||||
- ⏳ Implement governance module (proposal voting, parameter changes) and add API/UX flows to explorer/marketplace.
|
||||
- ⏳ Provide SLA-backed coordinator/pool hubs with capacity planning and billing instrumentation.
|
||||
|
||||
- **Developer Experience**
|
||||
- ⏳ Publish advanced tutorials (custom proposers, marketplace extensions) and maintain versioned API docs.
|
||||
- ⏳ Integrate CI/CD pipelines with canary deployments and blue/green release automation.
|
||||
- ⏳ Host quarterly architecture reviews capturing lessons learned and feeding into roadmap revisions.
|
||||
|
||||
## Stage 7 — Innovation & Ecosystem Services
|
||||
|
||||
- **GPU Service Expansion**
|
||||
- ✅ Implement dynamic service registry framework for 30+ GPU-accelerated services
|
||||
- ✅ Create service definitions for AI/ML (LLM inference, image/video generation, speech recognition, computer vision, recommendation systems)
|
||||
- ✅ Create service definitions for Media Processing (video transcoding, streaming, 3D rendering, image/audio processing)
|
||||
- ✅ Create service definitions for Scientific Computing (molecular dynamics, weather modeling, financial modeling, physics simulation, bioinformatics)
|
||||
- ✅ Create service definitions for Data Analytics (big data processing, real-time analytics, graph analytics, time series analysis)
|
||||
- ✅ Create service definitions for Gaming & Entertainment (cloud gaming, asset baking, physics simulation, VR/AR rendering)
|
||||
- ✅ Create service definitions for Development Tools (GPU compilation, model training, data processing, simulation testing, code generation)
|
||||
- ✅ Deploy service provider configuration UI with dynamic service selection
|
||||
- ✅ Implement service-specific validation and hardware requirement checking
|
||||
|
||||
- **Advanced Cryptography & Privacy**
|
||||
- ✅ Research zk-proof-based receipt attestation and prototype a privacy-preserving settlement flow.
|
||||
- ✅ Add confidential transaction support with opt-in ciphertext storage and HSM-backed key management.
|
||||
- ✅ Publish threat modeling updates and share mitigations with ecosystem partners.
|
||||
|
||||
- **Enterprise Integrations**
|
||||
- ✅ Deliver reference connectors for ERP/payment systems and document SLA expectations.
|
||||
- ✅ Stand up multi-tenant coordinator infrastructure with per-tenant isolation and billing metrics.
|
||||
- ✅ Launch ecosystem certification program (SDK conformance, security best practices) with public registry.
|
||||
|
||||
- **Community & Governance**
|
||||
- ✅ Establish open RFC process, publish governance website, and schedule regular community calls.
|
||||
- ✅ Sponsor hackathons/accelerators and provide grants for marketplace extensions and analytics tooling.
|
||||
- ✅ Track ecosystem KPIs (active marketplaces, cross-chain volume) and feed them into quarterly strategy reviews.
|
||||
|
||||
## Stage 8 — Frontier R&D & Global Expansion
|
||||
|
||||
- **Protocol Evolution**
|
||||
- ✅ Launch research consortium exploring next-gen consensus (hybrid PoA/PoS) and finalize whitepapers.
|
||||
- ⏳ Prototype sharding or rollup architectures to scale throughput beyond current limits.
|
||||
- ⏳ Standardize interoperability specs with industry bodies and submit proposals for adoption.
|
||||
|
||||
- **Global Rollout**
|
||||
- ⏳ Establish regional infrastructure hubs (multi-cloud) with localized compliance and data residency guarantees.
|
||||
- ⏳ Partner with regulators/enterprises to pilot regulated marketplaces and publish compliance playbooks.
|
||||
- ⏳ Expand localization (UI, documentation, support) covering top target markets.
|
||||
|
||||
- **Long-Term Sustainability**
|
||||
- ⏳ Create sustainability fund for ecosystem maintenance, bug bounties, and community stewardship.
|
||||
- ⏳ Define succession planning for core teams, including training programs and contributor pathways.
|
||||
- ⏳ Publish bi-annual roadmap retrospectives assessing KPI alignment and revising long-term goals.
|
||||
|
||||
## Stage 9 — Moonshot Initiatives
|
||||
|
||||
- **Decentralized Infrastructure**
|
||||
- ⏳ Transition coordinator/miner roles toward community-governed validator sets with incentive alignment.
|
||||
- ⏳ Explore decentralized storage/backbone options (IPFS/Filecoin) for ledger and marketplace artifacts.
|
||||
- ⏳ Prototype fully trustless marketplace settlement leveraging zero-knowledge rollups.
|
||||
|
||||
- **AI & Automation**
|
||||
- ⏳ Integrate AI-driven monitoring/anomaly detection for proposer health, market liquidity, and fraud detection.
|
||||
- ⏳ Automate incident response playbooks with ChatOps and policy engines.
|
||||
- ⏳ Launch research into autonomous agent participation (AI agents bidding/offering in the marketplace) and governance implications.
|
||||
- **Global Standards Leadership**
|
||||
- ⏳ Chair industry working groups defining receipt/marketplace interoperability standards.
|
||||
- ⏳ Publish annual transparency reports and sustainability metrics for stakeholders.
|
||||
- ⏳ Engage with academia and open-source foundations to steward long-term protocol evolution.
|
||||
|
||||
### Stage 10 — Stewardship & Legacy Planning
|
||||
|
||||
- **Open Governance Maturity**
|
||||
- ⏳ Transition roadmap ownership to community-elected councils with transparent voting and treasury controls.
|
||||
- ⏳ Codify constitutional documents (mission, values, conflict resolution) and publish public charters.
|
||||
- ⏳ Implement on-chain governance modules for protocol upgrades and ecosystem-wide decisions.
|
||||
|
||||
- **Educational & Outreach Programs**
|
||||
- ⏳ Fund university partnerships, research chairs, and developer fellowships focused on decentralized marketplace tech.
|
||||
- ⏳ Create certification tracks and mentorship programs for new validator/operators.
|
||||
- ⏳ Launch annual global summit and publish proceedings to share best practices across partners.
|
||||
|
||||
- **Long-Term Preservation**
|
||||
- ⏳ Archive protocol specs, governance records, and cultural artifacts in decentralized storage with redundancy.
|
||||
- ⏳ Establish legal/organizational frameworks to ensure continuity across jurisdictions.
|
||||
- ⏳ Develop end-of-life/transition plans for legacy components, documenting deprecation strategies and migration tooling.
|
||||
|
||||
|
||||
## Shared Libraries & Examples

Treat this roadmap as the canonical checklist during implementation. Mark completed tasks with ✅ and add dates or links to relevant PRs as development progresses.

286
docs/reference/threat-modeling.md
Normal file
@ -0,0 +1,286 @@

# AITBC Threat Modeling: Privacy Features
|
||||
|
||||
## Overview
|
||||
|
||||
This document provides a comprehensive threat model for AITBC's privacy-preserving features, focusing on zero-knowledge receipt attestation and confidential transactions. The analysis uses the STRIDE methodology to systematically identify threats and their mitigations.
|
||||
|
||||
## Document Version
|
||||
- Version: 1.0
|
||||
- Date: December 2024
|
||||
- Status: Published - Shared with Ecosystem Partners
|
||||
|
||||
## Scope
|
||||
|
||||
### In-Scope Components
|
||||
1. **ZK Receipt Attestation System**
|
||||
- Groth16 circuit implementation
|
||||
- Proof generation service
|
||||
- Verification contract
|
||||
- Trusted setup ceremony
|
||||
|
||||
2. **Confidential Transaction System**
|
||||
- Hybrid encryption (AES-256-GCM + X25519)
|
||||
- HSM-backed key management
|
||||
- Access control system
|
||||
- Audit logging infrastructure
|
||||
|
||||
### Out-of-Scope Components
|
||||
- Core blockchain consensus
|
||||
- Basic transaction processing
|
||||
- Non-confidential marketplace operations
|
||||
- Network layer security
|
||||
|
||||
## Threat Actors
|
||||
|
||||
| Actor | Motivation | Capability | Impact |
|
||||
|-------|------------|------------|--------|
|
||||
| Malicious Miner | Financial gain, sabotage | Access to mining software, limited compute | High |
|
||||
| Compromised Coordinator | Data theft, market manipulation | System access, private keys | Critical |
|
||||
| External Attacker | Financial theft, privacy breach | Public network, potential exploits | High |
|
||||
| Regulator | Compliance investigation | Legal authority, subpoenas | Medium |
|
||||
| Insider Threat | Data exfiltration | Internal access, knowledge | High |
|
||||
| Quantum Computer | Break cryptography | Future quantum capability | Future |
|
||||
|
||||
## STRIDE Analysis
|
||||
|
||||
### 1. Spoofing
|
||||
|
||||
#### ZK Receipt Attestation
|
||||
| Threat | Description | Likelihood | Impact | Mitigations |
|
||||
|--------|-------------|------------|--------|-------------|
|
||||
| Proof Forgery | Attacker creates fake ZK proofs | Medium | High | ✅ Groth16 soundness property<br>✅ Verification on-chain<br>⚠️ Trusted setup security |
|
||||
| Identity Spoofing | Miner impersonates another | Low | Medium | ✅ Miner registration with KYC<br>✅ Cryptographic signatures |
|
||||
| Coordinator Impersonation | Fake coordinator services | Low | High | ✅ TLS certificates<br>⚠️ DNSSEC recommended |
|
||||
|
||||
#### Confidential Transactions
|
||||
| Threat | Description | Likelihood | Impact | Mitigations |
|
||||
|--------|-------------|------------|--------|-------------|
|
||||
| Key Spoofing | Fake public keys for participants | Medium | High | ✅ HSM-protected keys<br>✅ Certificate validation |
|
||||
| Authorization Forgery | Fake audit authorization | Low | High | ✅ Signed tokens<br>✅ Short expiration times |
|
||||
|
||||
### 2. Tampering
|
||||
|
||||
#### ZK Receipt Attestation
|
||||
| Threat | Description | Likelihood | Impact | Mitigations |
|
||||
|--------|-------------|------------|--------|-------------|
|
||||
| Circuit Modification | Malicious changes to circom circuit | Low | Critical | ✅ Open-source circuits<br>✅ Circuit hash verification |
|
||||
| Proof Manipulation | Altering proofs during transmission | Medium | High | ✅ End-to-end encryption<br>✅ On-chain verification |
|
||||
| Setup Parameter Poisoning | Compromise trusted setup | Low | Critical | ⚠️ Multi-party ceremony needed<br>⚠️ Secure destruction of toxic waste |
|
||||
|
||||
#### Confidential Transactions
|
||||
| Threat | Description | Likelihood | Impact | Mitigations |
|
||||
|--------|-------------|------------|--------|-------------|
|
||||
| Data Tampering | Modify encrypted transaction data | Medium | High | ✅ AES-GCM authenticity<br>✅ Immutable audit logs |
|
||||
| Key Substitution | Swap public keys in transit | Low | High | ✅ Certificate pinning<br>✅ HSM key validation |
|
||||
| Access Control Bypass | Override authorization checks | Low | High | ✅ Role-based access control<br>✅ Audit logging of all changes |
|
||||
|
||||
### 3. Repudiation
|
||||
|
||||
#### ZK Receipt Attestation
|
||||
| Threat | Description | Likelihood | Impact | Mitigations |
|
||||
|--------|-------------|------------|--------|-------------|
|
||||
| Denial of Proof Generation | Miner denies creating proof | Low | Medium | ✅ On-chain proof records<br>✅ Signed proof metadata |
|
||||
| Receipt Denial | Party denies transaction occurred | Medium | Medium | ✅ Immutable blockchain ledger<br>✅ Cryptographic receipts |
|
||||
|
||||
#### Confidential Transactions
|
||||
| Threat | Description | Likelihood | Impact | Mitigations |
|
||||
|--------|-------------|------------|--------|-------------|
|
||||
| Access Denial | User denies accessing data | Low | Medium | ✅ Comprehensive audit logs<br>✅ Non-repudiation signatures |
|
||||
| Key Generation Denial | Deny creating encryption keys | Low | Medium | ✅ HSM audit trails<br>✅ Key rotation logs |
|
||||
|
||||
### 4. Information Disclosure
|
||||
|
||||
#### ZK Receipt Attestation
|
||||
| Threat | Description | Likelihood | Impact | Mitigations |
|
||||
|--------|-------------|------------|--------|-------------|
|
||||
| Witness Extraction | Extract private inputs from proof | Low | Critical | ✅ Zero-knowledge property<br>✅ Proofs reveal nothing about the witness |
|
||||
| Setup Parameter Leak | Expose toxic waste from trusted setup | Low | Critical | ⚠️ Secure multi-party setup<br>⚠️ Parameter destruction |
|
||||
| Side-Channel Attacks | Timing/power analysis | Low | Medium | ✅ Constant-time implementations<br>⚠️ Needs hardware security review |
|
||||
|
||||
#### Confidential Transactions
|
||||
| Threat | Description | Likelihood | Impact | Mitigations |
|
||||
|--------|-------------|------------|--------|-------------|
|
||||
| Private Key Extraction | Steal keys from HSM | Low | Critical | ✅ HSM security controls<br>✅ Hardware tamper resistance |
|
||||
| Decryption Key Leak | Expose DEKs | Medium | High | ✅ Per-transaction DEKs<br>✅ Encrypted key storage |
|
||||
| Metadata Analysis | Infer data from access patterns | Medium | Medium | ✅ Access logging<br>⚠️ Differential privacy needed |
|
||||
|
||||
### 5. Denial of Service
|
||||
|
||||
#### ZK Receipt Attestation
|
||||
| Threat | Description | Likelihood | Impact | Mitigations |
|
||||
|--------|-------------|------------|--------|-------------|
|
||||
| Proof Generation DoS | Overwhelm proof service | High | Medium | ✅ Rate limiting<br>✅ Queue management<br>⚠️ Need monitoring |
|
||||
| Verification Spam | Flood verification contract | High | High | ✅ Gas costs limit spam<br>⚠️ Need circuit optimization |
|
||||
|
||||
#### Confidential Transactions
|
||||
| Threat | Description | Likelihood | Impact | Mitigations |
|
||||
|--------|-------------|------------|--------|-------------|
|
||||
| Key Exhaustion | Deplete HSM key slots | Medium | Medium | ✅ Key rotation<br>✅ Resource monitoring |
|
||||
| Database Overload | Saturate with encrypted data | High | Medium | ✅ Connection pooling<br>✅ Query optimization |
|
||||
| Audit Log Flooding | Fill audit storage | Medium | Medium | ✅ Log rotation<br>✅ Storage monitoring |
|
||||
|
||||
### 6. Elevation of Privilege
|
||||
|
||||
#### ZK Receipt Attestation
|
||||
| Threat | Description | Likelihood | Impact | Mitigations |
|
||||
|--------|-------------|------------|--------|-------------|
|
||||
| Setup Privilege | Gain trusted setup access | Low | Critical | ⚠️ Multi-party ceremony<br>⚠️ Independent audits |
|
||||
| Coordinator Compromise | Full system control | Medium | Critical | ✅ Multi-sig controls<br>✅ Regular security audits |
|
||||
|
||||
#### Confidential Transactions
|
||||
| Threat | Description | Likelihood | Impact | Mitigations |
|
||||
|--------|-------------|------------|--------|-------------|
|
||||
| HSM Takeover | Gain HSM admin access | Low | Critical | ✅ HSM access controls<br>✅ Dual authorization |
|
||||
| Access Control Escalation | Bypass role restrictions | Medium | High | ✅ Principle of least privilege<br>✅ Regular access reviews |
|
||||
|
||||
## Risk Matrix
|
||||
|
||||
| Threat | Likelihood | Impact | Risk Level | Priority |
|
||||
|--------|------------|--------|------------|----------|
|
||||
| Trusted Setup Compromise | Low | Critical | HIGH | 1 |
|
||||
| HSM Compromise | Low | Critical | HIGH | 1 |
|
||||
| Proof Forgery | Medium | High | HIGH | 2 |
|
||||
| Private Key Extraction | Low | Critical | HIGH | 2 |
|
||||
| Information Disclosure | Medium | High | MEDIUM | 3 |
|
||||
| DoS Attacks | High | Medium | MEDIUM | 3 |
|
||||
| Side-Channel Attacks | Low | Medium | LOW | 4 |
|
||||
| Repudiation | Low | Medium | LOW | 4 |
|
||||
|
||||
## Implemented Mitigations
|
||||
|
||||
### ZK Receipt Attestation
|
||||
- ✅ Groth16 soundness and zero-knowledge properties
|
||||
- ✅ On-chain verification prevents tampering
|
||||
- ✅ Open-source circuit code for transparency
|
||||
- ✅ Rate limiting on proof generation
|
||||
- ✅ Comprehensive audit logging
|
||||
|
||||
### Confidential Transactions
|
||||
- ✅ AES-256-GCM provides confidentiality and authenticity
|
||||
- ✅ HSM-backed key management prevents key extraction
|
||||
- ✅ Role-based access control with time restrictions
|
||||
- ✅ Per-transaction DEKs for forward secrecy
|
||||
- ✅ Immutable audit trails with chain of hashes (illustrated below)
|
||||
- ✅ Multi-factor authentication for sensitive operations
|
||||
|
||||
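The hash-chained audit trail noted above can be pictured as each entry committing to the digest of the previous one, so any retroactive edit breaks the chain. The following is a minimal sketch under that assumption, not the production logging code; field names are invented for illustration.

```python
import hashlib
import json
import time


def append_audit_entry(log: list[dict], actor: str, action: str, resource: str) -> dict:
    """Append an entry whose hash commits to the previous entry (tamper-evident chain)."""
    prev_hash = log[-1]["entry_hash"] if log else "0" * 64
    entry = {
        "timestamp": time.time(),
        "actor": actor,
        "action": action,
        "resource": resource,
        "prev_hash": prev_hash,
    }
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["entry_hash"] = hashlib.sha256(payload).hexdigest()
    log.append(entry)
    return entry


def verify_audit_chain(log: list[dict]) -> bool:
    """Recompute each hash and check linkage; any modification is detected."""
    prev_hash = "0" * 64
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "entry_hash"}
        if body["prev_hash"] != prev_hash:
            return False
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if digest != entry["entry_hash"]:
            return False
        prev_hash = entry["entry_hash"]
    return True
```
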
## Recommended Future Improvements
|
||||
|
||||
### Short Term (1-3 months)
|
||||
1. **Trusted Setup Ceremony**
|
||||
- Implement multi-party computation (MPC) setup
|
||||
- Engage independent auditors
|
||||
- Publicly document process
|
||||
|
||||
2. **Enhanced Monitoring**
|
||||
- Real-time threat detection
|
||||
- Anomaly detection for access patterns
|
||||
- Automated alerting for security events
|
||||
|
||||
3. **Security Testing**
|
||||
- Penetration testing by third party
|
||||
- Side-channel resistance evaluation
|
||||
- Fuzzing of circuit implementations
|
||||
|
||||
### Medium Term (3-6 months)
|
||||
1. **Advanced Privacy**
|
||||
- Differential privacy for metadata
|
||||
- Secure multi-party computation
|
||||
- Homomorphic encryption support
|
||||
|
||||
2. **Quantum Resistance**
|
||||
- Evaluate post-quantum schemes
|
||||
- Migration planning for quantum threats
|
||||
- Hybrid cryptography implementations
|
||||
|
||||
3. **Compliance Automation**
|
||||
- Automated compliance reporting
|
||||
- Privacy impact assessments
|
||||
- Regulatory audit tools
|
||||
|
||||
### Long Term (6-12 months)
|
||||
1. **Formal Verification**
|
||||
- Formal proofs of circuit correctness
|
||||
- Verified smart contract deployments
|
||||
- Mathematical security proofs
|
||||
|
||||
2. **Decentralized Trust**
|
||||
- Distributed key generation
|
||||
- Threshold cryptography
|
||||
- Community governance of security
|
||||
|
||||
## Security Controls Summary
|
||||
|
||||
### Preventive Controls
|
||||
- Cryptographic guarantees (ZK proofs, encryption)
|
||||
- Access control mechanisms
|
||||
- Secure key management
|
||||
- Network security (TLS, certificates)
|
||||
|
||||
### Detective Controls
|
||||
- Comprehensive audit logging
|
||||
- Real-time monitoring
|
||||
- Anomaly detection
|
||||
- Security incident response
|
||||
|
||||
### Corrective Controls
|
||||
- Key rotation procedures
|
||||
- Incident response playbooks
|
||||
- Backup and recovery
|
||||
- System patching processes
|
||||
|
||||
### Compensating Controls
|
||||
- Insurance for cryptographic risks
|
||||
- Legal protections
|
||||
- Community oversight
|
||||
- Bug bounty programs
|
||||
|
||||
## Compliance Mapping
|
||||
|
||||
| Regulation | Requirement | Implementation |
|
||||
|------------|-------------|----------------|
|
||||
| GDPR | Encryption of personal data (Art. 32) | ✅ Opt-in confidential transactions |
|
||||
| GDPR | Data minimization | ✅ Selective disclosure |
|
||||
| SEC 17a-4 | Audit trail | ✅ Immutable logs |
|
||||
| MiFID II | Transaction reporting | ✅ ZK proof verification |
|
||||
| PCI DSS | Key management | ✅ HSM-backed keys |
|
||||
|
||||
## Incident Response
|
||||
|
||||
### Security Event Classification
|
||||
1. **Critical** - HSM compromise, trusted setup breach
|
||||
2. **High** - Large-scale data breach, proof forgery
|
||||
3. **Medium** - Single key compromise, access violation
|
||||
4. **Low** - Failed authentication, minor DoS
|
||||
|
||||
### Response Procedures
|
||||
1. Immediate containment
|
||||
2. Evidence preservation
|
||||
3. Stakeholder notification
|
||||
4. Root cause analysis
|
||||
5. Remediation actions
|
||||
6. Post-incident review
|
||||
|
||||
## Review Schedule
|
||||
|
||||
- **Monthly**: Security monitoring review
|
||||
- **Quarterly**: Threat model update
|
||||
- **Semi-annually**: Penetration testing
|
||||
- **Annually**: Full security audit
|
||||
|
||||
## Contact Information
|
||||
|
||||
- Security Team: security@aitbc.io
|
||||
- Bug Reports: security-bugs@aitbc.io
|
||||
- Security Researchers: research@aitbc.io
|
||||
|
||||
## Acknowledgments
|
||||
|
||||
This threat model was developed with input from:
|
||||
- AITBC Security Team
|
||||
- External Security Consultants
|
||||
- Community Security Researchers
|
||||
- Cryptography Experts
|
||||
|
||||
---
|
||||
|
||||
*This document is living and will be updated as new threats emerge and mitigations are implemented.*
|
||||
166
docs/reference/zk-implementation-summary.md
Normal file
@ -0,0 +1,166 @@
|
||||
# ZK Receipt Attestation Implementation Summary
|
||||
|
||||
## Overview
|
||||
|
||||
This summary covers the zero-knowledge proof system implemented for privacy-preserving receipt attestation in AITBC, which enables confidential settlements while maintaining verifiability.
|
||||
|
||||
## Components Implemented
|
||||
|
||||
### 1. ZK Circuits (`apps/zk-circuits/`)
|
||||
- **Basic Circuit**: Receipt hash preimage proof in circom
|
||||
- **Advanced Circuit**: Full receipt validation with pricing (WIP)
|
||||
- **Build System**: npm scripts for compilation, setup, and proving
|
||||
- **Testing**: Proof generation and verification tests
|
||||
- **Benchmarking**: Performance measurement tools
|
||||
|
||||
### 2. Proof Service (`apps/coordinator-api/src/app/services/zk_proofs.py`)
|
||||
- **ZKProofService**: Handles proof generation and verification
|
||||
- **Privacy Levels**: Basic (hide computation) and Enhanced (hide amounts)
|
||||
- **Integration**: Works with existing receipt signing system
|
||||
- **Error Handling**: Graceful fallback when the ZK toolchain is unavailable (sketched below)
|
||||
|
||||
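The error-handling behavior described above amounts to: attempt proof generation when a prover backend is configured, and fall back to an ordinary receipt when it is missing or fails. The sketch below shows that control flow; the class and method names are illustrative and do not mirror the actual `zk_proofs.py` API.

```python
import logging
from typing import Optional

logger = logging.getLogger(__name__)


class ZKProofServiceSketch:
    """Illustrative wrapper: generate a proof when possible, degrade gracefully otherwise."""

    def __init__(self, prover=None):
        # `prover` stands in for whatever backend actually runs the Groth16 tooling;
        # None means the ZK toolchain is unavailable in this deployment.
        self._prover = prover

    async def maybe_generate_proof(self, receipt: dict, privacy_level: str) -> Optional[dict]:
        if privacy_level not in ("basic", "enhanced"):
            raise ValueError(f"unknown privacy level: {privacy_level}")
        if self._prover is None:
            logger.warning("ZK prover unavailable; issuing receipt without proof")
            return None
        try:
            return await self._prover.prove(receipt, privacy_level)
        except Exception:
            # Fall back rather than failing the whole receipt flow.
            logger.exception("proof generation failed; continuing without ZK proof")
            return None
```

Callers attach the proof when one is returned and proceed unchanged when the result is `None`, which is what keeps existing receipts backward compatible.
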
### 3. Receipt Integration (`apps/coordinator-api/src/app/services/receipts.py`)
|
||||
- **Async Support**: Updated create_receipt to support async ZK generation
|
||||
- **Optional Privacy**: ZK proofs generated only when requested
|
||||
- **Backward Compatibility**: Existing receipts work unchanged
|
||||
|
||||
### 4. Verification Contract (`contracts/ZKReceiptVerifier.sol`)
|
||||
- **On-Chain Verification**: Groth16 proof verification
|
||||
- **Security Features**: Double-spend prevention, timestamp validation
|
||||
- **Authorization**: Controlled access to verification functions
|
||||
- **Batch Support**: Efficient batch verification
|
||||
|
||||
### 5. Settlement Integration (`apps/coordinator-api/aitbc/settlement/hooks.py`)
|
||||
- **Privacy Options**: Settlement requests can specify privacy level
|
||||
- **Proof Inclusion**: ZK proofs included in settlement messages
|
||||
- **Bridge Support**: Works with existing cross-chain bridges
|
||||
|
||||
## Key Features
|
||||
|
||||
### Privacy Levels
|
||||
1. **Basic**: Hide computation details, reveal settlement amount
|
||||
2. **Enhanced**: Hide all amounts, prove correctness mathematically
|
||||
|
||||
### Performance Metrics
|
||||
- **Proof Size**: ~200 bytes (Groth16)
|
||||
- **Generation Time**: 5-15 seconds
|
||||
- **Verification Time**: <5ms on-chain
|
||||
- **Gas Cost**: ~200k gas
|
||||
|
||||
### Security Measures
|
||||
- Trusted setup requirements documented
|
||||
- Circuit audit procedures defined
|
||||
- Gradual rollout strategy
|
||||
- Emergency pause capabilities
|
||||
|
||||
## Testing Coverage
|
||||
|
||||
### Unit Tests
|
||||
- Proof generation with various inputs
|
||||
- Verification success/failure scenarios
|
||||
- Privacy level validation
|
||||
- Error handling
|
||||
|
||||
### Integration Tests
|
||||
- Receipt creation with ZK proofs
|
||||
- Settlement flow with privacy
|
||||
- Cross-chain bridge integration
|
||||
|
||||
### Benchmarks
|
||||
- Proof generation time measurement
|
||||
- Verification performance
|
||||
- Memory usage tracking
|
||||
- Gas cost estimation
|
||||
|
||||
## Usage Examples
|
||||
|
||||
### Creating Private Receipt
|
||||
```python
|
||||
receipt = await receipt_service.create_receipt(
|
||||
job=job,
|
||||
miner_id=miner_id,
|
||||
job_result=result,
|
||||
result_metrics=metrics,
|
||||
privacy_level="basic" # Enable ZK proof
|
||||
)
|
||||
```
|
||||
|
||||
### Cross-Chain Settlement with Privacy
|
||||
```python
|
||||
settlement = await settlement_hook.initiate_manual_settlement(
|
||||
job_id="job-123",
|
||||
target_chain_id=2,
|
||||
use_zk_proof=True,
|
||||
privacy_level="enhanced"
|
||||
)
|
||||
```
|
||||
|
||||
### On-Chain Verification
|
||||
```solidity
|
||||
bool verified = verifier.verifyAndRecord(
|
||||
proof.a,
|
||||
proof.b,
|
||||
proof.c,
|
||||
proof.publicSignals
|
||||
);
|
||||
```
|
||||
|
||||
## Current Status
|
||||
|
||||
### Completed ✅
|
||||
1. Research and technology selection (Groth16)
|
||||
2. Development environment setup
|
||||
3. Basic circuit implementation
|
||||
4. Proof generation service
|
||||
5. Verification contract
|
||||
6. Settlement integration
|
||||
7. Comprehensive testing
|
||||
8. Performance benchmarking
|
||||
|
||||
### Pending ⏳
|
||||
1. Trusted setup ceremony (production requirement)
|
||||
2. Circuit security audit
|
||||
3. Full receipt validation circuit
|
||||
4. Production deployment
|
||||
|
||||
## Next Steps for Production
|
||||
|
||||
### Immediate (Week 1-2)
|
||||
1. Run end-to-end tests with real data
|
||||
2. Performance optimization based on benchmarks
|
||||
3. Security review of implementation
|
||||
|
||||
### Short Term (Month 1)
|
||||
1. Plan and execute trusted setup ceremony
|
||||
2. Complete advanced circuit with signature verification
|
||||
3. Third-party security audit
|
||||
|
||||
### Long Term (Month 2-3)
|
||||
1. Production deployment with gradual rollout
|
||||
2. Monitor performance and gas costs
|
||||
3. Consider PLONK for universal setup
|
||||
|
||||
## Risks and Mitigations
|
||||
|
||||
### Technical Risks
|
||||
- **Trusted Setup**: Mitigate with multi-party ceremony
|
||||
- **Performance**: Optimize circuits and use batch verification
|
||||
- **Complexity**: Maintain clear documentation and examples
|
||||
|
||||
### Operational Risks
|
||||
- **User Adoption**: Provide clear UI indicators for privacy
|
||||
- **Gas Costs**: Optimize proof size and verification
|
||||
- **Regulatory**: Ensure compliance with privacy regulations
|
||||
|
||||
## Documentation
|
||||
|
||||
- [ZK Technology Comparison](zk-technology-comparison.md)
|
||||
- [Circuit Design](zk-receipt-attestation.md)
|
||||
- [Development Guide](../../apps/zk-circuits/README.md)
|
||||
- [API Documentation](../api/coordinator/endpoints.md)
|
||||
|
||||
## Conclusion
|
||||
|
||||
The ZK receipt attestation system provides a solid foundation for privacy-preserving settlements in AITBC. The implementation balances privacy, performance, and usability while maintaining backward compatibility with existing systems.
|
||||
|
||||
The modular design allows for gradual adoption and future enhancements, making it suitable for both testing and production deployment.
|
||||
260
docs/reference/zk-receipt-attestation.md
Normal file
@ -0,0 +1,260 @@
|
||||
# Zero-Knowledge Receipt Attestation Design
|
||||
|
||||
## Overview
|
||||
|
||||
This document outlines the design for adding zero-knowledge proof capabilities to the AITBC receipt attestation system, enabling privacy-preserving settlement flows while maintaining verifiability.
|
||||
|
||||
## Goals
|
||||
|
||||
1. **Privacy**: Hide sensitive transaction details (amounts, parties, specific computations)
|
||||
2. **Verifiability**: Prove receipts are valid and correctly signed without revealing contents
|
||||
3. **Compatibility**: Work with existing receipt signing and settlement systems
|
||||
4. **Efficiency**: Minimize proof generation and verification overhead
|
||||
|
||||
## Architecture
|
||||
|
||||
### Current Receipt System
|
||||
|
||||
The existing system has:
|
||||
- Receipt signing with coordinator private key
|
||||
- Optional coordinator attestations
|
||||
- History retrieval endpoints
|
||||
- Cross-chain settlement hooks
|
||||
|
||||
Receipt structure includes:
|
||||
- Job ID and metadata
|
||||
- Computation results
|
||||
- Pricing information
|
||||
- Miner and coordinator signatures
|
||||
|
||||
### Privacy-Preserving Flow
|
||||
|
||||
```
|
||||
1. Job Execution
|
||||
↓
|
||||
2. Receipt Generation (clear text)
|
||||
↓
|
||||
3. ZK Circuit Input Preparation
|
||||
↓
|
||||
4. ZK Proof Generation
|
||||
↓
|
||||
5. On-Chain Settlement (with proof)
|
||||
↓
|
||||
6. Verification (without revealing data)
|
||||
```
|
||||
|
||||
## ZK Circuit Design
|
||||
|
||||
### What to Prove
|
||||
|
||||
1. **Receipt Validity**
|
||||
- Receipt was signed by coordinator
|
||||
- Computation was performed correctly
|
||||
- Pricing follows agreed rules
|
||||
|
||||
2. **Settlement Conditions**
|
||||
- Amount owed is correctly calculated
|
||||
- Parties have sufficient funds/balance
|
||||
- Cross-chain transfer conditions met
|
||||
|
||||
### What to Hide
|
||||
|
||||
1. **Sensitive Data**
|
||||
- Actual computation amounts
|
||||
- Specific job details
|
||||
- Pricing rates
|
||||
- Participant identities
|
||||
|
||||
### Circuit Components
|
||||
|
||||
```circom
|
||||
// High-level circuit structure
|
||||
template ReceiptAttestation() {
|
||||
// Public inputs
|
||||
signal input receiptHash;
|
||||
signal input settlementAmount;
|
||||
signal input timestamp;
|
||||
|
||||
// Private inputs
|
||||
signal input receipt;
|
||||
signal input computationResult;
|
||||
signal input pricingRate;
|
||||
    signal input minerReward;
    signal input coordinatorFee;
|
||||
|
||||
// Verify receipt signature
|
||||
component signatureVerifier = ECDSAVerify();
|
||||
// ... signature verification logic
|
||||
|
||||
// Verify computation correctness
|
||||
component computationChecker = ComputationVerify();
|
||||
// ... computation verification logic
|
||||
|
||||
// Verify pricing calculation
|
||||
component pricingVerifier = PricingVerify();
|
||||
// ... pricing verification logic
|
||||
|
||||
// Output settlement proof
|
||||
    settlementAmount === minerReward + coordinatorFee;
|
||||
}
|
||||
```
|
||||
|
||||
## Implementation Plan
|
||||
|
||||
### Phase 1: Research & Prototyping
|
||||
1. **Library Selection**
|
||||
   - snarkjs for development (JavaScript/TypeScript); an invocation sketch follows below
|
||||
   - circomlib for standard circuits
|
||||
- Web3.js for blockchain integration
|
||||
|
||||
2. **Basic Circuit**
|
||||
- Simple receipt hash preimage proof
|
||||
- ECDSA signature verification
|
||||
- Basic arithmetic operations
|
||||
|
||||
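For this prototyping phase, proofs can be produced by shelling out to the snarkjs CLI (`snarkjs groth16 fullprove`) before any deeper integration exists. The sketch below assumes compiled artifacts at `build/receipt.wasm` and `build/receipt_final.zkey`; those paths are placeholders, not fixed project locations.

```python
import json
import subprocess
import tempfile
from pathlib import Path


def prove_with_snarkjs(inputs: dict,
                       wasm: str = "build/receipt.wasm",
                       zkey: str = "build/receipt_final.zkey") -> tuple[dict, list]:
    """Run `snarkjs groth16 fullprove` and return (proof, public signals)."""
    with tempfile.TemporaryDirectory() as tmp_dir:
        tmp = Path(tmp_dir)
        (tmp / "input.json").write_text(json.dumps(inputs))
        # snarkjs writes proof.json and public.json next to the given paths.
        subprocess.run(
            ["snarkjs", "groth16", "fullprove",
             str(tmp / "input.json"), wasm, zkey,
             str(tmp / "proof.json"), str(tmp / "public.json")],
            check=True,
        )
        proof = json.loads((tmp / "proof.json").read_text())
        public = json.loads((tmp / "public.json").read_text())
    return proof, public
```
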
### Phase 2: Integration
|
||||
1. **Coordinator API Updates**
|
||||
- Add ZK proof generation endpoint
|
||||
- Integrate with existing receipt signing
|
||||
- Add proof verification utilities
|
||||
|
||||
2. **Settlement Flow**
|
||||
- Modify cross-chain hooks to accept proofs
|
||||
- Update verification logic
|
||||
- Maintain backward compatibility
|
||||
|
||||
### Phase 3: Optimization
|
||||
1. **Performance**
|
||||
- Trusted setup for Groth16
|
||||
- Batch proof generation
|
||||
- Recursive proofs for complex receipts
|
||||
|
||||
2. **Security**
|
||||
- Audit circuits
|
||||
- Formal verification
|
||||
- Side-channel resistance
|
||||
|
||||
## Data Flow
|
||||
|
||||
### Proof Generation (Coordinator)
|
||||
|
||||
```python
|
||||
async def generate_receipt_proof(receipt: Receipt) -> ZKProof:
|
||||
# 1. Prepare circuit inputs
|
||||
public_inputs = {
|
||||
"receiptHash": hash_receipt(receipt),
|
||||
"settlementAmount": calculate_settlement(receipt),
|
||||
"timestamp": receipt.timestamp
|
||||
}
|
||||
|
||||
private_inputs = {
|
||||
"receipt": receipt,
|
||||
"computationResult": receipt.result,
|
||||
"pricingRate": receipt.pricing.rate,
|
||||
"minerReward": receipt.pricing.miner_reward
|
||||
}
|
||||
|
||||
# 2. Generate witness
|
||||
witness = generate_witness(public_inputs, private_inputs)
|
||||
|
||||
# 3. Generate proof
|
||||
proof = groth16.prove(witness, proving_key)
|
||||
|
||||
return {
|
||||
"proof": proof,
|
||||
"publicSignals": public_inputs
|
||||
}
|
||||
```
|
||||
|
||||
### Proof Verification (On-Chain/Settlement Layer)
|
||||
|
||||
```solidity
|
||||
contract SettlementVerifier {
|
||||
// Groth16 verifier
|
||||
function verifySettlement(
|
||||
uint256[2] memory a,
|
||||
uint256[2][2] memory b,
|
||||
uint256[2] memory c,
|
||||
uint256[] memory input
|
||||
) public pure returns (bool) {
|
||||
return verifyProof(a, b, c, input);
|
||||
}
|
||||
|
||||
function settleWithProof(
|
||||
address recipient,
|
||||
uint256 amount,
|
||||
ZKProof memory proof
|
||||
) public {
|
||||
require(verifySettlement(proof.a, proof.b, proof.c, proof.inputs));
|
||||
// Execute settlement
|
||||
_transfer(recipient, amount);
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
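The same verifier can also be exercised off-chain from the settlement service before a transaction is submitted. The sketch below uses web3.py rather than the Web3.js listed earlier, simply to stay consistent with this document's Python examples; the contract address, ABI file, and proof field names are assumptions.

```python
import json

from web3 import Web3


def verify_settlement_offchain(rpc_url: str, verifier_address: str,
                               abi_path: str, proof: dict) -> bool:
    """Call SettlementVerifier.verifySettlement as a read-only check before settling."""
    w3 = Web3(Web3.HTTPProvider(rpc_url))
    with open(abi_path) as f:
        abi = json.load(f)
    verifier = w3.eth.contract(address=Web3.to_checksum_address(verifier_address), abi=abi)
    # Proof components follow the Groth16 layout used in the contract above.
    return verifier.functions.verifySettlement(
        proof["a"], proof["b"], proof["c"], proof["publicSignals"]
    ).call()
```
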
## Privacy Levels
|
||||
|
||||
### Level 1: Basic Privacy
|
||||
- Hide computation amounts
|
||||
- Prove pricing correctness
|
||||
- Reveal participant identities
|
||||
|
||||
### Level 2: Enhanced Privacy
|
||||
- Hide all amounts
|
||||
- Zero-knowledge participant proofs
|
||||
- Anonymous settlement
|
||||
|
||||
### Level 3: Full Privacy
|
||||
- Complete transaction privacy
|
||||
- Ring signatures or similar
|
||||
- Confidential transfers
|
||||
|
||||
## Security Considerations
|
||||
|
||||
1. **Trusted Setup**
|
||||
- Multi-party ceremony for Groth16
|
||||
- Documentation of setup process
|
||||
- Toxic waste destruction proof
|
||||
|
||||
2. **Circuit Security**
|
||||
- Constant-time operations
|
||||
- No side-channel leaks
|
||||
- Formal verification where possible
|
||||
|
||||
3. **Integration Security**
|
||||
- Maintain existing security guarantees
|
||||
- Fail-safe verification
|
||||
- Gradual rollout with monitoring
|
||||
|
||||
## Migration Strategy
|
||||
|
||||
1. **Parallel Operation**
|
||||
- Run both clear and ZK receipts
|
||||
- Gradual opt-in adoption
|
||||
- Performance monitoring
|
||||
|
||||
2. **Backward Compatibility**
|
||||
- Existing receipts remain valid
|
||||
- Optional ZK proofs
|
||||
- Graceful degradation
|
||||
|
||||
3. **Network Upgrade**
|
||||
- Coordinate with all participants
|
||||
- Clear communication
|
||||
- Rollback capability
|
||||
|
||||
## Next Steps
|
||||
|
||||
1. **Research Task**
|
||||
- Evaluate zk-SNARKs vs zk-STARKs trade-offs
|
||||
- Benchmark proof generation times
|
||||
- Assess gas costs for on-chain verification
|
||||
|
||||
2. **Prototype Development**
|
||||
- Implement basic circuit in circom
|
||||
- Create proof generation service
|
||||
- Build verification contract
|
||||
|
||||
3. **Integration Planning**
|
||||
- Design API changes
|
||||
- Plan data migration
|
||||
- Prepare rollout strategy
|
||||
181
docs/reference/zk-technology-comparison.md
Normal file
@ -0,0 +1,181 @@
|
||||
# ZK Technology Comparison for Receipt Attestation
|
||||
|
||||
## Overview
|
||||
|
||||
Analysis of zero-knowledge proof systems for AITBC receipt attestation, focusing on practical considerations for integration with existing infrastructure.
|
||||
|
||||
## Technology Options
|
||||
|
||||
### 1. zk-SNARKs (Zero-Knowledge Succinct Non-Interactive Argument of Knowledge)
|
||||
|
||||
**Examples**: Groth16, PLONK, Halo2
|
||||
|
||||
**Pros**:
|
||||
- **Small proof size**: ~200 bytes for Groth16
|
||||
- **Fast verification**: Constant time, ~3ms on-chain
|
||||
- **Mature ecosystem**: circom, snarkjs, bellman, arkworks
|
||||
- **Low gas costs**: ~200k gas for verification on Ethereum
|
||||
- **Industry adoption**: Used by Aztec, Tornado Cash, Zcash
|
||||
|
||||
**Cons**:
|
||||
- **Trusted setup**: Required for Groth16 (toxic waste problem)
|
||||
- **Longer proof generation**: 10-30 seconds depending on circuit size
|
||||
- **Complex setup**: Ceremony needs multiple participants
|
||||
- **Quantum vulnerability**: Not post-quantum secure
|
||||
|
||||
### 2. zk-STARKs (Zero-Knowledge Scalable Transparent Argument of Knowledge)
|
||||
|
||||
**Examples**: StarkEx, Winterfell
|
||||
|
||||
**Pros**:
|
||||
- **No trusted setup**: Transparent setup process
|
||||
- **Post-quantum secure**: Resistant to quantum attacks
|
||||
- **Faster proving**: Often faster than SNARKs for large circuits
|
||||
- **Transparent**: No toxic waste, fully verifiable setup
|
||||
|
||||
**Cons**:
|
||||
- **Larger proofs**: ~45KB for typical circuits
|
||||
- **Higher verification cost**: ~500k-1M gas on-chain
|
||||
- **Newer ecosystem**: Fewer tools and libraries
|
||||
- **Less adoption**: Limited production deployments
|
||||
|
||||
## Use Case Analysis
|
||||
|
||||
### Receipt Attestation Requirements
|
||||
|
||||
1. **Proof Size**: Important for on-chain storage costs
|
||||
2. **Verification Speed**: Critical for settlement latency
|
||||
3. **Setup Complexity**: Affects deployment timeline
|
||||
4. **Ecosystem Maturity**: Impacts development speed
|
||||
5. **Privacy Needs**: Moderate (hiding amounts, not full anonymity)
|
||||
|
||||
### Quantitative Comparison
|
||||
|
||||
| Metric | Groth16 (SNARK) | PLONK (SNARK) | STARK |
|
||||
|--------|----------------|---------------|-------|
|
||||
| Proof Size | 200 bytes | 400-500 bytes | 45KB |
|
||||
| Prover Time | 10-30s | 5-15s | 2-10s |
|
||||
| Verifier Time | 3ms | 5ms | 50ms |
|
||||
| Gas Cost | 200k | 300k | 800k |
|
||||
| Trusted Setup | Yes | Universal | No |
|
||||
| Library Support | Excellent | Good | Limited |
|
||||
|
||||
## Recommendation
|
||||
|
||||
### Phase 1: Groth16 for MVP
|
||||
|
||||
**Rationale**:
|
||||
1. **Proven technology**: Battle-tested in production
|
||||
2. **Small proofs**: Essential for cost-effective on-chain verification
|
||||
3. **Fast verification**: Critical for settlement performance
|
||||
4. **Tool maturity**: circom + snarkjs ecosystem
|
||||
5. **Community knowledge**: Extensive documentation and examples
|
||||
|
||||
**Mitigations for trusted setup**:
|
||||
- Multi-party ceremony with >100 participants
|
||||
- Public documentation of process
|
||||
- Consider PLONK for Phase 2 if setup becomes bottleneck
|
||||
|
||||
### Phase 2: Evaluate PLONK
|
||||
|
||||
**Rationale**:
|
||||
- Universal trusted setup (one-time for all circuits)
|
||||
- Slightly larger proofs but acceptable
|
||||
- More flexible for circuit updates
|
||||
- Growing ecosystem support
|
||||
|
||||
### Phase 3: Consider STARKs
|
||||
|
||||
**Rationale**:
|
||||
- If quantum resistance becomes priority
|
||||
- If proof size optimizations improve
|
||||
- If gas costs become less critical
|
||||
|
||||
## Implementation Strategy
|
||||
|
||||
### Circuit Complexity Analysis
|
||||
|
||||
**Basic Receipt Circuit**:
|
||||
- Hash verification: ~50 constraints
|
||||
- Signature verification: ~10,000 constraints
|
||||
- Arithmetic operations: ~100 constraints
|
||||
- Total: ~10,150 constraints
|
||||
|
||||
**With Privacy Features**:
|
||||
- Range proofs: ~1,000 constraints
|
||||
- Merkle proofs: ~1,000 constraints
|
||||
- Additional checks: ~500 constraints
|
||||
- Total: ~12,650 constraints
|
||||
|
||||
### Performance Estimates
|
||||
|
||||
**Groth16**:
|
||||
- Setup time: 2-5 hours
|
||||
- Proving time: 5-15 seconds
|
||||
- Verification: 3ms
|
||||
- Proof size: 200 bytes
|
||||
|
||||
**Infrastructure Impact**:
|
||||
- Coordinator: Additional 5-15s per receipt
|
||||
- Settlement layer: Minimal impact (fast verification)
|
||||
- Storage: Negligible increase
|
||||
|
||||
## Security Considerations
|
||||
|
||||
### Trusted Setup Risks
|
||||
|
||||
1. **Toxic Waste**: If compromised, can forge proofs
|
||||
2. **Setup Integrity**: Requires honest participants
|
||||
3. **Documentation**: Must be publicly verifiable
|
||||
|
||||
### Mitigation Strategies
|
||||
|
||||
1. **Multi-party Ceremony**:
|
||||
- Minimum 100 participants
|
||||
- Geographically distributed
|
||||
- Public livestream
|
||||
|
||||
2. **Circuit Audits**:
|
||||
- Formal verification where possible
|
||||
- Third-party security review
|
||||
- Public disclosure of circuits
|
||||
|
||||
3. **Gradual Rollout**:
|
||||
- Start with low-value transactions
|
||||
- Monitor for anomalies
|
||||
- Emergency pause capability
|
||||
|
||||
## Development Plan
|
||||
|
||||
### Week 1-2: Environment Setup
|
||||
- Install circom and snarkjs
|
||||
- Create basic test circuit
|
||||
- Benchmark proof generation
|
||||
|
||||
### Week 3-4: Basic Circuit
|
||||
- Implement receipt hash verification
|
||||
- Add signature verification
|
||||
- Test with sample receipts
|
||||
|
||||
### Week 5-6: Integration
|
||||
- Add to coordinator API
|
||||
- Create verification contract
|
||||
- Test settlement flow
|
||||
|
||||
### Week 7-8: Trusted Setup
|
||||
- Plan ceremony logistics
|
||||
- Prepare ceremony software
|
||||
- Execute multi-party setup
|
||||
|
||||
### Week 9-10: Testing & Audit
|
||||
- End-to-end testing
|
||||
- Security review
|
||||
- Performance optimization
|
||||
|
||||
## Next Steps
|
||||
|
||||
1. **Immediate**: Set up development environment
|
||||
2. **Research**: Deep dive into circom best practices
|
||||
3. **Prototype**: Build minimal viable circuit
|
||||
4. **Evaluate**: Performance with real receipt data
|
||||
5. **Decide**: Final technology choice based on testing
|
||||
27
docs/requirements.txt
Normal file
@ -0,0 +1,27 @@
|
||||
# MkDocs Material Theme
|
||||
mkdocs-material==9.4.8
|
||||
mkdocs-material-extensions==1.3.1
|
||||
|
||||
# MkDocs Core and Plugins
|
||||
mkdocs==1.5.3
|
||||
mkdocs-git-revision-date-localized-plugin==1.2.6
|
||||
mkdocs-awesome-pages-plugin==2.9.2
|
||||
mkdocs-minify-plugin==0.7.4
|
||||
mkdocs-glightbox==0.3.4
|
||||
mkdocs-video==1.5.0
|
||||
mkdocs-social-plugin==1.0.0
|
||||
mkdocs-macros-plugin==1.0.5
|
||||
|
||||
# Python Extensions for Markdown
|
||||
pymdown-extensions==10.8.1
|
||||
markdown-include==0.8.0
|
||||
mkdocs-mermaid2-plugin==1.1.1
|
||||
|
||||
# Additional dependencies
|
||||
requests==2.31.0
|
||||
aiohttp==3.9.1
|
||||
python-dotenv==1.0.0
|
||||
|
||||
# Development dependencies
|
||||
mkdocs-redirects==1.2.1
|
||||
mkdocs-monorepo-plugin==1.0.2
|
||||
204
docs/roadmap-retrospective-template.md
Normal file
@ -0,0 +1,204 @@
|
||||
# AITBC Roadmap Retrospective - [Period]
|
||||
|
||||
**Date**: [Date]
|
||||
**Period**: [e.g., H1 2024, H2 2024]
|
||||
**Authors**: AITBC Core Team
|
||||
|
||||
## Executive Summary
|
||||
|
||||
[Brief 2-3 sentence summary of the period's achievements and challenges]
|
||||
|
||||
## KPI Performance Review
|
||||
|
||||
### Key Metrics
|
||||
|
||||
| KPI | Target | Actual | Status | Notes |
|
||||
|-----|--------|--------|--------|-------|
|
||||
| Active Marketplaces | [target] | [actual] | ✅/⚠️/❌ | [comments] |
|
||||
| Cross-Chain Volume | [target] | [actual] | ✅/⚠️/❌ | [comments] |
|
||||
| Active Developers | [target] | [actual] | ✅/⚠️/❌ | [comments] |
|
||||
| TVL (Total Value Locked) | [target] | [actual] | ✅/⚠️/❌ | [comments] |
|
||||
| Transaction Volume | [target] | [actual] | ✅/⚠️/❌ | [comments] |
|
||||
|
||||
### Performance Analysis
|
||||
|
||||
#### Achievements
|
||||
- [List 3-5 major achievements]
|
||||
- [Include metrics and impact]
|
||||
|
||||
#### Challenges
|
||||
- [List 2-3 key challenges]
|
||||
- [Include root causes if known]
|
||||
|
||||
#### Learnings
|
||||
- [Key insights from the period]
|
||||
- [What worked well]
|
||||
- [What didn't work as expected]
|
||||
|
||||
## Roadmap Progress
|
||||
|
||||
### Completed Items
|
||||
|
||||
#### Stage 7 - Community & Governance
|
||||
- ✅ [Item] - [Date completed] - [Brief description]
|
||||
- ✅ [Item] - [Date completed] - [Brief description]
|
||||
|
||||
#### Stage 8 - Frontier R&D & Global Expansion
|
||||
- ✅ [Item] - [Date completed] - [Brief description]
|
||||
- ✅ [Item] - [Date completed] - [Brief description]
|
||||
|
||||
### In Progress Items
|
||||
|
||||
#### [Stage Name]
|
||||
- ⏳ [Item] - [Progress %] - [ETA] - [Blockers if any]
|
||||
- ⏳ [Item] - [Progress %] - [ETA] - [Blockers if any]
|
||||
|
||||
### Delayed Items
|
||||
|
||||
#### [Stage Name]
|
||||
- ⏸️ [Item] - [Original date] → [New date] - [Reason for delay]
|
||||
- ⏸️ [Item] - [Original date] → [New date] - [Reason for delay]
|
||||
|
||||
### New Items Added
|
||||
|
||||
- 🆕 [Item] - [Added date] - [Priority] - [Rationale]
|
||||
|
||||
## Ecosystem Health
|
||||
|
||||
### Developer Ecosystem
|
||||
- **New Developers**: [number]
|
||||
- **Active Projects**: [number]
|
||||
- **GitHub Stars**: [number]
|
||||
- **Community Engagement**: [description]
|
||||
|
||||
### User Adoption
|
||||
- **Active Users**: [number]
|
||||
- **Transaction Growth**: [percentage]
|
||||
- **Geographic Distribution**: [key regions]
|
||||
|
||||
### Partner Ecosystem
|
||||
- **New Partners**: [number]
|
||||
- **Integration Status**: [description]
|
||||
- **Success Stories**: [1-2 examples]
|
||||
|
||||
## Technical Achievements
|
||||
|
||||
### Major Releases
|
||||
- [Release Name] - [Date] - [Key features]
|
||||
- [Release Name] - [Date] - [Key features]
|
||||
|
||||
### Research Outcomes
|
||||
- [Paper/Prototype] - [Status] - [Impact]
|
||||
- [Research Area] - [Findings] - [Next steps]
|
||||
|
||||
### Infrastructure Improvements
|
||||
- [Improvement] - [Impact] - [Metrics]
|
||||
|
||||
## Community & Governance
|
||||
|
||||
### Governance Participation
|
||||
- **Proposal Submissions**: [number]
|
||||
- **Voting Turnout**: [percentage]
|
||||
- **Community Discussions**: [key topics]
|
||||
|
||||
### Community Initiatives
|
||||
- [Initiative] - [Participation] - [Outcomes]
|
||||
- [Initiative] - [Participation] - [Outcomes]
|
||||
|
||||
### Events & Activities
|
||||
- [Event] - [Attendance] - [Feedback]
|
||||
- [Event] - [Attendance] - [Feedback]
|
||||
|
||||
## Financial Overview
|
||||
|
||||
### Treasury Status
|
||||
- **Balance**: [amount]
|
||||
- **Burn Rate**: [amount/month]
|
||||
- **Runway**: [months]
|
||||
|
||||
### Grant Program
|
||||
- **Grants Awarded**: [number]
|
||||
- **Total Amount**: [amount]
|
||||
- **Success Rate**: [percentage]
|
||||
|
||||
## Risk Assessment
|
||||
|
||||
### Technical Risks
|
||||
- [Risk] - [Probability] - [Impact] - [Mitigation]
|
||||
|
||||
### Market Risks
|
||||
- [Risk] - [Probability] - [Impact] - [Mitigation]
|
||||
|
||||
### Operational Risks
|
||||
- [Risk] - [Probability] - [Impact] - [Mitigation]
|
||||
|
||||
## Next Period Goals
|
||||
|
||||
### Primary Objectives
|
||||
1. [Objective] - [Success criteria]
|
||||
2. [Objective] - [Success criteria]
|
||||
3. [Objective] - [Success criteria]
|
||||
|
||||
### Key Initiatives
|
||||
- [Initiative] - [Owner] - [Timeline]
|
||||
- [Initiative] - [Owner] - [Timeline]
|
||||
- [Initiative] - [Owner] - [Timeline]
|
||||
|
||||
### Resource Requirements
|
||||
- **Team**: [needs]
|
||||
- **Budget**: [amount]
|
||||
- **Partnerships**: [requirements]
|
||||
|
||||
## Long-term Vision Updates
|
||||
|
||||
### Strategy Adjustments
|
||||
- [Adjustment] - [Rationale] - [Expected impact]
|
||||
|
||||
### New Opportunities
|
||||
- [Opportunity] - [Potential] - [Next steps]
|
||||
|
||||
### Timeline Revisions
|
||||
- [Milestone] - [Original] → [Revised] - [Reason]
|
||||
|
||||
## Feedback & Suggestions
|
||||
|
||||
### Community Feedback
|
||||
- [Summary of key feedback]
|
||||
- [Action items]
|
||||
|
||||
### Partner Feedback
|
||||
- [Summary of key feedback]
|
||||
- [Action items]
|
||||
|
||||
### Internal Feedback
|
||||
- [Summary of key feedback]
|
||||
- [Action items]
|
||||
|
||||
## Appendices
|
||||
|
||||
### A. Detailed Metrics
|
||||
[Additional charts and data]
|
||||
|
||||
### B. Project Timeline
|
||||
[Visual timeline with dependencies]
|
||||
|
||||
### C. Risk Register
|
||||
[Detailed risk matrix]
|
||||
|
||||
### D. Action Item Tracker
|
||||
[List of action items with owners and due dates]
|
||||
|
||||
---
|
||||
|
||||
**Next Review Date**: [Date]
|
||||
**Document Version**: [version]
|
||||
**Distribution**: [list of recipients]
|
||||
|
||||
## Approval
|
||||
|
||||
| Role | Name | Signature | Date |
|
||||
|------|------|-----------|------|
|
||||
| Project Lead | | | |
|
||||
| Tech Lead | | | |
|
||||
| Community Lead | | | |
|
||||
| Ecosystem Lead | | | |
|
||||
225
docs/roadmap.md
@ -1,225 +0,0 @@
|
||||
# AITBC Development Roadmap
|
||||
|
||||
This roadmap aggregates high-priority tasks derived from the bootstrap specifications in `docs/bootstrap/` and tracks progress across the monorepo. Update this document as milestones evolve.
|
||||
|
||||
## Stage 1 — Upcoming Focus Areas
|
||||
|
||||
- **Blockchain Node Foundations**
|
||||
- ✅ Bootstrap module layout in `apps/blockchain-node/src/`.
|
||||
- ✅ Implement SQLModel schemas and RPC stubs aligned with historical/attested receipts.
|
||||
|
||||
- **Explorer Web Enablement**
|
||||
- ✅ Finish mock integration across all pages and polish styling + mock/live toggle.
|
||||
- ✅ Begin wiring coordinator endpoints (e.g., `/v1/jobs/{job_id}/receipts`).
|
||||
|
||||
- **Marketplace Web Scaffolding**
|
||||
- ✅ Scaffold Vite/vanilla frontends consuming coordinator receipt history endpoints and SDK examples.
|
||||
|
||||
- **Pool Hub Services**
|
||||
- ✅ Initialize FastAPI project, scoring registry, and telemetry ingestion hooks leveraging coordinator/miner metrics.
|
||||
|
||||
- **CI Enhancements**
|
||||
- ✅ Add blockchain-node tests once available and frontend build/lint checks to `.github/workflows/python-tests.yml` or follow-on workflows.
|
||||
- ✅ Provide systemd unit + installer scripts under `scripts/` for streamlined deployment.
|
||||
|
||||
## Stage 2 — Core Services (MVP)
|
||||
|
||||
- **Coordinator API**
|
||||
- ✅ Scaffold FastAPI project (`apps/coordinator-api/src/app/`).
|
||||
- ✅ Implement job submission, status, result endpoints.
|
||||
- ✅ Add miner registration, heartbeat, poll, result routes.
|
||||
- ✅ Wire SQLite persistence for jobs, miners, receipts (historical `JobReceipt` table).
|
||||
- ✅ Provide `.env.example`, `pyproject.toml`, and run scripts.
|
||||
|
||||
- **Miner Node**
|
||||
- ✅ Implement capability probe and control loop (register → heartbeat → fetch jobs).
|
||||
- ✅ Build CLI and Python runners with sandboxed work dirs (result reporting stubbed to coordinator).
|
||||
|
||||
- **Blockchain Node**
|
||||
- ✅ Define SQLModel schema for blocks, transactions, accounts, receipts (`apps/blockchain-node/src/aitbc_chain/models.py`).
|
||||
- ✅ Harden schema parity across runtime + storage:
|
||||
- Alembic baseline + follow-on migrations in `apps/blockchain-node/migrations/` now track the SQLModel schema (blocks, transactions, receipts, accounts).
|
||||
- Added `Relationship` + `ForeignKey` wiring in `apps/blockchain-node/src/aitbc_chain/models.py` for block ↔ transaction ↔ receipt joins.
|
||||
- Introduced hex/enum validation hooks via Pydantic validators to ensure hash integrity and safe persistence.
|
||||
- ✅ Implement PoA proposer loop with block assembly (`apps/blockchain-node/src/aitbc_chain/consensus/poa.py`).
|
||||
- ✅ Expose REST RPC endpoints for tx submission, balances, receipts (`apps/blockchain-node/src/aitbc_chain/rpc/router.py`).
|
||||
- ✅ Deliver WebSocket RPC + P2P gossip layer:
|
||||
- ✅ Stand up WebSocket subscription endpoints (`apps/blockchain-node/src/aitbc_chain/rpc/websocket.py`) mirroring REST payloads.
|
||||
- ✅ Implement pub/sub transport for block + transaction gossip backed by an in-memory broker (Starlette `Broadcast` or Redis) with configurable fan-out.
|
||||
- ✅ Add integration tests and load-test harness ensuring gossip convergence and back-pressure handling.
|
||||
- ✅ Ship devnet scripts (`apps/blockchain-node/scripts/`).
|
||||
- ✅ Add observability hooks (JSON logging, Prometheus metrics) and integrate coordinator mock into devnet tooling.
|
||||
- ⏳ Expand observability dashboards + miner mock integration:
|
||||
- Build Grafana dashboards for consensus health (block intervals, proposer rotation) and RPC latency (`apps/blockchain-node/observability/`).
|
||||
- Expose miner mock telemetry (job throughput, error rates) via shared Prometheus registry and ingest into blockchain-node dashboards.
|
||||
- Add alerting rules (Prometheus `Alertmanager`) for stalled proposers, queue saturation, and miner mock disconnects.
|
||||
- Wire coordinator mock into devnet tooling to simulate real-world load and validate observability hooks.
|
||||
|
||||
- **Receipt Schema**
|
||||
- ✅ Finalize canonical JSON receipt format under `protocols/receipts/` (includes sample signed receipts).
|
||||
- ✅ Implement signing/verification helpers in `packages/py/aitbc-crypto` (JS SDK pending).
|
||||
- ✅ Translate `docs/bootstrap/aitbc_tech_plan.md` contract skeleton into Solidity project (`packages/solidity/aitbc-token/`).
|
||||
- ✅ Add deployment/test scripts and document minting flow (`packages/solidity/aitbc-token/scripts/` and `docs/run.md`).
|
||||
|
||||
- **Wallet Daemon**
|
||||
- ✅ Implement encrypted keystore (Argon2id + XChaCha20-Poly1305) via `KeystoreService`.
|
||||
- ✅ Provide REST and JSON-RPC endpoints for wallet management and signing (`api_rest.py`, `api_jsonrpc.py`).
|
||||
- ✅ Add mock ledger adapter with SQLite backend powering event history (`ledger_mock/`).
|
||||
- ✅ Integrate Python receipt verification helpers (`aitbc_sdk`) and expose API/service utilities validating miner + coordinator signatures.
|
||||
- ✅ Harden REST API workflows (create/list/unlock/sign) with structured password policy enforcement and deterministic pytest coverage in `apps/wallet-daemon/tests/test_wallet_api.py`.
|
||||
- ✅ Implement Wallet SDK receipt ingestion + attestation surfacing:
|
||||
- Added `/v1/jobs/{job_id}/receipts` client helpers with cursor pagination, retry/backoff, and summary reporting (`packages/py/aitbc-sdk/src/receipts.py`).
|
||||
- Reused crypto helpers to validate miner and coordinator signatures, capturing per-key failure reasons for downstream UX.
|
||||
- Surfaced aggregated attestation status (`ReceiptStatus`) and failure diagnostics for SDK + UI consumers; JS helper parity still planned.
|
||||
|
||||
## Stage 3 — Pool Hub & Marketplace
|
||||
|
||||
- **Pool Hub**
|
||||
- ✅ Implement miner registry, scoring engine, and `/v1/match` API with Redis/PostgreSQL backing stores.
|
||||
- ✅ Add observability endpoints (`/v1/health`, `/v1/metrics`) plus Prometheus instrumentation and integration tests.
|
||||
|
||||
- **Marketplace Web**
|
||||
- ✅ Initialize Vite project with vanilla TypeScript (`apps/marketplace-web/`).
|
||||
- ✅ Build offer list, bid form, and stats cards powered by mock data fixtures (`public/mock/`).
|
||||
- ✅ Provide API abstraction toggling mock/live mode (`src/lib/api.ts`) and wire coordinator endpoints.
|
||||
- ⏳ Validate live mode against coordinator `/v1/marketplace/*` responses and add auth feature flags for rollout.
|
||||
|
||||
- **Explorer Web**
|
||||
- ✅ Initialize Vite + TypeScript project scaffold (`apps/explorer-web/`).
|
||||
- ✅ Add routed pages for overview, blocks, transactions, addresses, receipts.
|
||||
- ✅ Seed mock datasets (`public/mock/`) and fetch helpers powering overview + blocks tables.
|
||||
- ✅ Extend mock integrations to transactions, addresses, and receipts pages.
|
||||
- ✅ Implement styling system, mock/live data toggle, and coordinator API wiring scaffold.
|
||||
- ✅ Render overview stats from mock block/transaction/receipt summaries with graceful empty-state fallbacks.
|
||||
- ⏳ Validate live mode + responsive polish:
|
||||
- Hit live coordinator endpoints (`/v1/blocks`, `/v1/transactions`, `/v1/addresses`, `/v1/receipts`) via `getDataMode() === "live"` and reconcile payloads with UI models.
|
||||
- Add fallbacks + error surfacing for partial/failed live responses (toast + console diagnostics).
|
||||
- Audit responsive breakpoints (`public/css/layout.css`) and adjust grid/typography for tablet + mobile; add regression checks in Percy/Playwright snapshots.
|
||||
|
||||
## Stage 4 — Observability & Production Polish
|
||||
|
||||
- **Observability & Telemetry**
|
||||
- ⏳ Build Grafana dashboards for PoA consensus health (block intervals, proposer rotation cadence) leveraging `poa_last_block_interval_seconds`, `poa_proposer_rotations_total`, and per-proposer counters.
|
||||
- ⏳ Surface RPC latency histograms/summaries for critical endpoints (`rpc_get_head`, `rpc_send_tx`, `rpc_submit_receipt`) and add Grafana panels with SLO thresholds.
|
||||
- ⏳ Ingest miner mock telemetry (job throughput, failure rate) into the shared Prometheus registry and wire panels/alerts that correlate miner health with consensus metrics.
|
||||
|
||||
- **Explorer Web (Live Mode)**
|
||||
- ⏳ Finalize live `getDataMode() === "live"` workflow: align API payload contracts, render loading/error states, and persist mock/live toggle preference.
|
||||
- ⏳ Expand responsive testing (tablet/mobile) and add automated visual regression snapshots prior to launch.
|
||||
- ⏳ Integrate Playwright smoke tests covering overview, blocks, and transactions pages in live mode.
|
||||
|
||||
- **Marketplace Web (Launch Readiness)**
|
||||
- ✅ Connect mock listings/bids to coordinator data sources and provide feature flags for live mode rollout.
|
||||
- ✅ Implement auth/session scaffolding for marketplace actions and document API assumptions in `apps/marketplace-web/README.md`.
|
||||
- ⏳ Add Grafana panels monitoring marketplace API throughput and error rates once endpoints are live.
|
||||
|
||||
- **Operational Hardening**
|
||||
- ⏳ Extend Alertmanager rules to cover RPC error spikes, proposer stalls, and miner disconnects using the new metrics.
|
||||
- ⏳ Document dashboard import + alert deployment steps in `docs/run.md` for operators.
|
||||
- ⏳ Prepare Stage 3 release checklist linking dashboards, alerts, and smoke tests prior to production cutover.
|
||||
|
||||
## Stage 5 — Scaling & Release Readiness
|
||||
|
||||
- **Infrastructure Scaling**
|
||||
- ⏳ Benchmark blockchain node throughput under sustained load; capture CPU/memory targets and suggest horizontal scaling thresholds.
|
||||
- ⏳ Build Terraform/Helm templates for dev/staging/prod environments, including Prometheus/Grafana bundles.
|
||||
- ⏳ Implement autoscaling policies for coordinator, miners, and marketplace services with synthetic traffic tests.
|
||||
|
||||
- **Reliability & Compliance**
|
||||
- ⏳ Formalize backup/restore procedures for PostgreSQL, Redis, and ledger storage with scheduled jobs.
|
||||
- ⏳ Complete security hardening review (TLS termination, API auth, secrets management) and document mitigations in `docs/security.md`.
|
||||
- ⏳ Add chaos testing scripts (network partition, coordinator outage) and track mean-time-to-recovery metrics.
|
||||
|
||||
- **Product Launch Checklist**
|
||||
- ⏳ Finalize public documentation (API references, onboarding guides) and publish to the docs portal.
|
||||
- ⏳ Coordinate beta release timeline, including user acceptance testing of explorer/marketplace live modes.
|
||||
- ⏳ Establish post-launch monitoring playbooks and on-call rotations.
|
||||
|
||||
## Stage 6 — Ecosystem Expansion
|
||||
|
||||
- **Cross-Chain & Interop**
|
||||
- ⏳ Prototype cross-chain settlement hooks leveraging external bridges; document integration patterns.
|
||||
- ⏳ Extend SDKs (Python/JS) with pluggable transport abstractions for multi-network support.
|
||||
- ⏳ Evaluate third-party explorer/analytics integrations and publish partner onboarding guides.
|
||||
|
||||
- **Marketplace Growth**
|
||||
- ⏳ Launch incentive programs (staking, liquidity mining) and expose telemetry dashboards tracking campaign performance.
|
||||
- ⏳ Implement governance module (proposal voting, parameter changes) and add API/UX flows to explorer/marketplace.
|
||||
- ⏳ Provide SLA-backed coordinator/pool hubs with capacity planning and billing instrumentation.
|
||||
|
||||
- **Developer Experience**
|
||||
- ⏳ Publish advanced tutorials (custom proposers, marketplace extensions) and maintain versioned API docs.
|
||||
- ⏳ Integrate CI/CD pipelines with canary deployments and blue/green release automation.
|
||||
- ⏳ Host quarterly architecture reviews capturing lessons learned and feeding into roadmap revisions.
|
||||
|
||||
## Stage 7 — Innovation & Ecosystem Services
|
||||
|
||||
- **Advanced Cryptography & Privacy**
|
||||
- ⏳ Research zk-proof-based receipt attestation and prototype a privacy-preserving settlement flow.
|
||||
- ⏳ Add confidential transaction support in coordinator/miner stack with opt-in ciphertext storage.
|
||||
- ⏳ Publish threat modeling updates and share mitigations with ecosystem partners.
|
||||
|
||||
- **Enterprise Integrations**
|
||||
- ⏳ Deliver reference connectors for ERP/payment systems and document SLA expectations.
|
||||
- ⏳ Stand up multi-tenant coordinator infrastructure with per-tenant isolation and billing metrics.
|
||||
- ⏳ Launch ecosystem certification program (SDK conformance, security best practices) with public registry.
|
||||
|
||||
- **Community & Governance**
|
||||
- ⏳ Establish open RFC process, publish governance website, and schedule regular community calls.
|
||||
- ⏳ Sponsor hackathons/accelerators and provide grants for marketplace extensions and analytics tooling.
|
||||
- ⏳ Track ecosystem KPIs (active marketplaces, cross-chain volume) and feed them into quarterly strategy reviews.
|
||||
|
||||
## Stage 8 — Frontier R&D & Global Expansion
|
||||
|
||||
- **Protocol Evolution**
|
||||
- ⏳ Launch research consortium exploring next-gen consensus (hybrid PoA/PoS) and finalize whitepapers.
|
||||
- ⏳ Prototype sharding or rollup architectures to scale throughput beyond current limits.
|
||||
- ⏳ Standardize interoperability specs with industry bodies and submit proposals for adoption.
|
||||
|
||||
- **Global Rollout**
|
||||
- ⏳ Establish regional infrastructure hubs (multi-cloud) with localized compliance and data residency guarantees.
|
||||
- ⏳ Partner with regulators/enterprises to pilot regulated marketplaces and publish compliance playbooks.
|
||||
- ⏳ Expand localization (UI, documentation, support) covering top target markets.
|
||||
|
||||
- **Long-Term Sustainability**
|
||||
- ⏳ Create sustainability fund for ecosystem maintenance, bug bounties, and community stewardship.
|
||||
- ⏳ Define succession planning for core teams, including training programs and contributor pathways.
|
||||
- ⏳ Publish bi-annual roadmap retrospectives assessing KPI alignment and revising long-term goals.
|
||||
|
||||
## Stage 9 — Moonshot Initiatives
|
||||
|
||||
- **Decentralized Infrastructure**
|
||||
- ⏳ Transition coordinator/miner roles toward community-governed validator sets with incentive alignment.
|
||||
- ⏳ Explore decentralized storage/backbone options (IPFS/Filecoin) for ledger and marketplace artifacts.
|
||||
- ⏳ Prototype fully trustless marketplace settlement leveraging zero-knowledge rollups.
|
||||
|
||||
- **AI & Automation**
|
||||
- ⏳ Integrate AI-driven monitoring/anomaly detection for proposer health, market liquidity, and fraud detection.
|
||||
- ⏳ Automate incident response playbooks with ChatOps and policy engines.
|
||||
- ⏳ Launch research into autonomous agent participation (AI agents bidding/offering in the marketplace) and governance implications.
|
||||
- **Global Standards Leadership**
|
||||
- ⏳ chair industry working groups defining receipt/marketplace interoperability standards.
|
||||
- ⏳ Publish annual transparency reports and sustainability metrics for stakeholders.
|
||||
- ⏳ Engage with academia and open-source foundations to steward long-term protocol evolution.
|
||||
|
||||
### Stage 10 — Stewardship & Legacy Planning
|
||||
|
||||
- **Open Governance Maturity**
|
||||
- ⏳ Transition roadmap ownership to community-elected councils with transparent voting and treasury controls.
|
||||
- ⏳ Codify constitutional documents (mission, values, conflict resolution) and publish public charters.
|
||||
- ⏳ Implement on-chain governance modules for protocol upgrades and ecosystem-wide decisions.
|
||||
|
||||
- **Educational & Outreach Programs**
|
||||
- ⏳ Fund university partnerships, research chairs, and developer fellowships focused on decentralized marketplace tech.
|
||||
- ⏳ Create certification tracks and mentorship programs for new validator/operators.
|
||||
- ⏳ Launch annual global summit and publish proceedings to share best practices across partners.
|
||||
|
||||
- **Long-Term Preservation**
|
||||
- ⏳ Archive protocol specs, governance records, and cultural artifacts in decentralized storage with redundancy.
|
||||
- ⏳ Establish legal/organizational frameworks to ensure continuity across jurisdictions.
|
||||
- ⏳ Develop end-of-life/transition plans for legacy components, documenting deprecation strategies and migration tooling.
|
||||
|
||||
|
||||
## Shared Libraries & Examples
|
||||
the canonical checklist during implementation. Mark completed tasks with ✅ and add dates or links to relevant PRs as development progresses.
|
||||
|
||||
1
docs/roadmap.md
Symbolic link
@ -0,0 +1 @@
|
||||
reference/roadmap.md
|
||||
99
docs/scripts/generate_openapi.py
Executable file
@ -0,0 +1,99 @@
|
||||
#!/usr/bin/env python3
"""
Generate OpenAPI specifications from FastAPI services
"""

import json
import sys
from pathlib import Path

import requests


def extract_openapi_spec(service_name: str, base_url: str, output_file: str):
    """Extract OpenAPI spec from a running FastAPI service"""
    try:
        # Fetch the OpenAPI spec from the running service (fail fast if unreachable)
        response = requests.get(f"{base_url}/openapi.json", timeout=10)
        response.raise_for_status()

        spec = response.json()

        # Add service-specific metadata (service_name already ends in "API")
        spec["info"]["title"] = f"AITBC {service_name}"
        spec["info"]["description"] = f"OpenAPI specification for the AITBC {service_name}"
        spec["info"]["version"] = "1.0.0"

        # Add servers configuration
        spec["servers"] = [
            {
                "url": "https://api.aitbc.io",
                "description": "Production server"
            },
            {
                "url": "https://staging-api.aitbc.io",
                "description": "Staging server"
            },
            {
                "url": "http://localhost:8011",
                "description": "Development server"
            }
        ]

        # Save the spec
        output_path = Path(output_file)
        output_path.parent.mkdir(parents=True, exist_ok=True)

        with open(output_path, 'w') as f:
            json.dump(spec, f, indent=2)

        print(f"✓ Generated {service_name} OpenAPI spec: {output_file}")
        return True

    except Exception as e:
        print(f"✗ Failed to generate {service_name} spec: {e}")
        return False


def main():
    """Generate OpenAPI specs for all AITBC services"""
    services = [
        {
            "name": "Coordinator API",
            "base_url": "http://127.0.0.2:8011",
            "output": "api/coordinator/openapi.json"
        },
        {
            "name": "Blockchain Node API",
            "base_url": "http://127.0.0.2:8080",
            "output": "api/blockchain/openapi.json"
        },
        {
            "name": "Wallet Daemon API",
            "base_url": "http://127.0.0.2:8071",
            "output": "api/wallet/openapi.json"
        }
    ]

    print("Generating OpenAPI specifications...")

    all_success = True
    for service in services:
        success = extract_openapi_spec(
            service["name"],
            service["base_url"],
            service["output"]
        )
        if not success:
            all_success = False

    if all_success:
        print("\n✓ All OpenAPI specifications generated successfully!")
        print("\nNext steps:")
        print("1. Review the generated specs")
        print("2. Commit them to the documentation repository")
        print("3. Update the API reference documentation")
    else:
        print("\n✗ Some specifications failed to generate")
        sys.exit(1)


if __name__ == "__main__":
    main()
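A minimal local run of this script, assuming the coordinator, blockchain node, and wallet daemon are already listening on the loopback addresses listed above; the `api/...` output paths are relative to the working directory, so running it from `docs/` keeps the specs next to the documentation sources:

```bash
# Generate all three specs (the services must already be running)
cd docs
python scripts/generate_openapi.py

# Spot-check one of the generated files
python -m json.tool api/coordinator/openapi.json | head -n 20
```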
|
||||
271
docs/transparency-report-template.md
Normal file
@ -0,0 +1,271 @@
|
||||
# AITBC Annual Transparency Report - [Year]
|
||||
|
||||
**Published**: [Date]
|
||||
**Reporting Period**: [Start Date] to [End Date]
|
||||
**Prepared By**: AITBC Foundation
|
||||
|
||||
## Executive Summary
|
||||
|
||||
[2-3 paragraph summary of the year's achievements, challenges, and strategic direction]
|
||||
|
||||
## Mission & Vision Alignment
|
||||
|
||||
### Mission Progress
|
||||
- [Progress towards decentralizing AI/ML marketplace]
|
||||
- [Key metrics showing mission advancement]
|
||||
- [Community impact stories]
|
||||
|
||||
### Vision Milestones
|
||||
- [Technical milestones achieved]
|
||||
- [Ecosystem growth metrics]
|
||||
- [Strategic partnerships formed]
|
||||
|
||||
## Governance Transparency
|
||||
|
||||
### Governance Structure
|
||||
- **Current Model**: [Description of governance model]
|
||||
- **Decision Making Process**: [How decisions are made]
|
||||
- **Community Participation**: [Governance participation metrics]
|
||||
|
||||
### Key Governance Actions
|
||||
| Date | Action | Outcome | Community Feedback |
|
||||
|------|--------|---------|-------------------|
|
||||
| [Date] | [Proposal/Decision] | [Result] | [Summary] |
|
||||
| [Date] | [Proposal/Decision] | [Result] | [Summary] |
|
||||
|
||||
### Treasury & Financial Transparency
|
||||
- **Total Treasury**: [Amount] AITBC
|
||||
- **Annual Expenditure**: [Amount] AITBC
|
||||
- **Funding Sources**: [Breakdown]
|
||||
- **Expense Categories**: [Breakdown]
|
||||
|
||||
#### Budget Allocation
|
||||
| Category | Budgeted | Actual | Variance | Notes |
|
||||
|----------|----------|--------|----------|-------|
|
||||
| Development | [Amount] | [Amount] | [Amount] | [Explanation] |
|
||||
| Operations | [Amount] | [Amount] | [Amount] | [Explanation] |
|
||||
| Community | [Amount] | [Amount] | [Amount] | [Explanation] |
|
||||
| Research | [Amount] | [Amount] | [Amount] | [Explanation] |
|
||||
|
||||
## Technical Development
|
||||
|
||||
### Protocol Updates
|
||||
#### Major Releases
|
||||
- [Version] - [Date] - [Key Features]
|
||||
- [Version] - [Date] - [Key Features]
|
||||
- [Version] - [Date] - [Key Features]
|
||||
|
||||
#### Research & Development
|
||||
- **Research Papers Published**: [Number]
|
||||
- **Prototypes Developed**: [Number]
|
||||
- **Patents Filed**: [Number]
|
||||
- **Open Source Contributions**: [Details]
|
||||
|
||||
### Security & Reliability
|
||||
- **Security Audits**: [Number] completed
|
||||
- **Critical Issues**: [Number] found and fixed
|
||||
- **Uptime**: [Percentage]
|
||||
- **Incidents**: [Number] with details
|
||||
|
||||
### Performance Metrics
|
||||
| Metric | Target | Actual | Status |
|
||||
|--------|--------|--------|--------|
|
||||
| TPS | [Target] | [Actual] | ✅/⚠️/❌ |
|
||||
| Block Time | [Target] | [Actual] | ✅/⚠️/❌ |
|
||||
| Finality | [Target] | [Actual] | ✅/⚠️/❌ |
|
||||
| Gas Efficiency | [Target] | [Actual] | ✅/⚠️/❌ |
|
||||
|
||||
## Ecosystem Health
|
||||
|
||||
### Network Statistics
|
||||
- **Total Transactions**: [Number]
|
||||
- **Active Addresses**: [Number]
|
||||
- **Total Value Locked (TVL)**: [Amount]
|
||||
- **Cross-Chain Volume**: [Amount]
|
||||
- **Marketplaces**: [Number]
|
||||
|
||||
### Developer Ecosystem
|
||||
- **Active Developers**: [Number]
|
||||
- **Projects Built**: [Number]
|
||||
- **GitHub Stars**: [Number]
|
||||
- **Developer Grants Awarded**: [Number]
|
||||
|
||||
### Community Metrics
|
||||
- **Community Members**: [Discord/Telegram/etc.]
|
||||
- **Monthly Active Users**: [Number]
|
||||
- **Social Media Engagement**: [Metrics]
|
||||
- **Event Participation**: [Number of events, attendance]
|
||||
|
||||
### Geographic Distribution
|
||||
| Region | Users | Developers | Partners | Growth |
|
||||
|--------|-------|------------|----------|--------|
|
||||
| North America | [Number] | [Number] | [Number] | [%] |
|
||||
| Europe | [Number] | [Number] | [Number] | [%] |
|
||||
| Asia Pacific | [Number] | [Number] | [Number] | [%] |
|
||||
| Other | [Number] | [Number] | [Number] | [%] |
|
||||
|
||||
## Sustainability Metrics
|
||||
|
||||
### Environmental Impact
|
||||
- **Energy Consumption**: [kWh/year]
|
||||
- **Carbon Footprint**: [tCO2/year]
|
||||
- **Renewable Energy Usage**: [Percentage]
|
||||
- **Efficiency Improvements**: [Year-over-year change]
|
||||
|
||||
### Economic Sustainability
|
||||
- **Revenue Streams**: [Breakdown]
|
||||
- **Cost Optimization**: [Achievements]
|
||||
- **Long-term Funding**: [Strategy]
|
||||
- **Risk Management**: [Approach]
|
||||
|
||||
### Social Impact
|
||||
- **Education Programs**: [Number of participants]
|
||||
- **Accessibility Features**: [Improvements]
|
||||
- **Inclusion Initiatives**: [Programs launched]
|
||||
- **Community Benefits**: [Stories/examples]
|
||||
|
||||
## Partnerships & Collaborations
|
||||
|
||||
### Strategic Partners
|
||||
| Partner | Type | Since | Key Achievements |
|
||||
|---------|------|-------|-----------------|
|
||||
| [Partner] | [Type] | [Year] | [Achievements] |
|
||||
| [Partner] | [Type] | [Year] | [Achievements] |
|
||||
|
||||
### Academic Collaborations
|
||||
- **University Partnerships**: [Number]
|
||||
- **Research Projects**: [Number]
|
||||
- **Student Programs**: [Participants]
|
||||
- **Publications**: [Number]
|
||||
|
||||
### Industry Alliances
|
||||
- **Consortium Members**: [Number]
|
||||
- **Working Groups**: [Active groups]
|
||||
- **Standardization Efforts**: [Contributions]
|
||||
- **Joint Initiatives**: [Projects]
|
||||
|
||||
## Compliance & Legal
|
||||
|
||||
### Regulatory Compliance
|
||||
- **Jurisdictions**: [Countries/regions of operation]
|
||||
- **Licenses**: [Held licenses]
|
||||
- **Compliance Programs**: [Active programs]
|
||||
- **Audits**: [Results]
|
||||
|
||||
### Data Privacy
|
||||
- **Privacy Policy Updates**: [Changes made]
|
||||
- **Data Protection**: [Measures implemented]
|
||||
- **User Rights**: [Enhancements]
|
||||
- **Incidents**: [Any breaches/issues]
|
||||
|
||||
### Intellectual Property
|
||||
- **Patents**: [Portfolio summary]
|
||||
- **Trademarks**: [Registered marks]
|
||||
- **Open Source**: [Licenses used]
|
||||
- **Contributions**: [Policy]
|
||||
|
||||
## Risk Management
|
||||
|
||||
### Identified Risks
|
||||
| Risk Category | Risk Level | Mitigation | Status |
|
||||
|---------------|------------|------------|--------|
|
||||
| Technical | [Level] | [Strategy] | [Status] |
|
||||
| Market | [Level] | [Strategy] | [Status] |
|
||||
| Regulatory | [Level] | [Strategy] | [Status] |
|
||||
| Operational | [Level] | [Strategy] | [Status] |
|
||||
|
||||
### Incident Response
|
||||
- **Security Incidents**: [Number] with details
|
||||
- **Response Time**: [Average time]
|
||||
- **Recovery Time**: [Average time]
|
||||
- **Lessons Learned**: [Key takeaways]
|
||||
|
||||
## Community Feedback & Engagement
|
||||
|
||||
### Feedback Channels
|
||||
- **Proposals Received**: [Number]
|
||||
- **Community Votes**: [Number]
|
||||
- **Feedback Implementation Rate**: [Percentage]
|
||||
- **Response Time**: [Average time]
|
||||
|
||||
### Major Community Initiatives
|
||||
- [Initiative 1] - [Participation] - [Outcome]
|
||||
- [Initiative 2] - [Participation] - [Outcome]
|
||||
- [Initiative 3] - [Participation] - [Outcome]
|
||||
|
||||
### Challenges & Concerns
|
||||
- **Top Issues Raised**: [Summary]
|
||||
- **Actions Taken**: [Responses]
|
||||
- **Ongoing Concerns**: [Status]
|
||||
|
||||
## Future Outlook
|
||||
|
||||
### Next Year Goals
|
||||
1. [Goal 1] - [Success criteria]
|
||||
2. [Goal 2] - [Success criteria]
|
||||
3. [Goal 3] - [Success criteria]
|
||||
|
||||
### Strategic Priorities
|
||||
- [Priority 1] - [Rationale]
|
||||
- [Priority 2] - [Rationale]
|
||||
- [Priority 3] - [Rationale]
|
||||
|
||||
### Resource Allocation
|
||||
- **Development**: [Planned investment]
|
||||
- **Community**: [Planned investment]
|
||||
- **Research**: [Planned investment]
|
||||
- **Operations**: [Planned investment]
|
||||
|
||||
## Acknowledgments
|
||||
|
||||
### Contributors
|
||||
- **Core Team**: [Number of contributors]
|
||||
- **Community Contributors**: [Number]
|
||||
- **Top Contributors**: [Recognition]
|
||||
|
||||
### Special Thanks
|
||||
- [Individual/Organization 1]
|
||||
- [Individual/Organization 2]
|
||||
- [Individual/Organization 3]
|
||||
|
||||
## Appendices
|
||||
|
||||
### A. Detailed Financial Statements
|
||||
[Link to detailed financial reports]
|
||||
|
||||
### B. Technical Specifications
|
||||
[Link to technical documentation]
|
||||
|
||||
### C. Governance Records
|
||||
[Link to governance documentation]
|
||||
|
||||
### D. Community Survey Results
|
||||
[Key findings from community surveys]
|
||||
|
||||
### E. Third-Party Audits
|
||||
[Links to audit reports]
|
||||
|
||||
---
|
||||
|
||||
## Contact & Verification
|
||||
|
||||
### Verification
|
||||
- **Financial Audit**: [Auditor] - [Report link]
|
||||
- **Technical Audit**: [Auditor] - [Report link]
|
||||
- **Security Audit**: [Auditor] - [Report link]
|
||||
|
||||
### Contact Information
|
||||
- **Transparency Questions**: transparency@aitbc.io
|
||||
- **General Inquiries**: info@aitbc.io
|
||||
- **Security Issues**: security@aitbc.io
|
||||
- **Media Inquiries**: media@aitbc.io
|
||||
|
||||
### Document Information
|
||||
- **Version**: [Version number]
|
||||
- **Last Updated**: [Date]
|
||||
- **Next Report Due**: [Date]
|
||||
- **Archive**: [Link to past reports]
|
||||
|
||||
---
|
||||
|
||||
*This transparency report is published annually as part of AITBC's commitment to openness and accountability. All data presented is accurate to the best of our knowledge. For questions or clarifications, please contact us at transparency@aitbc.io.*
|
||||
49
docs/user-guide/creating-jobs.md
Normal file
@ -0,0 +1,49 @@
|
||||
---
|
||||
title: Creating Jobs
|
||||
description: Learn how to create and submit AI jobs
|
||||
---
|
||||
|
||||
# Creating Jobs
|
||||
|
||||
Jobs are the primary way to execute AI workloads on the AITBC platform.
|
||||
|
||||
## Job Types
|
||||
|
||||
- **AI Inference**: Run pre-trained models
|
||||
- **Model Training**: Train new models
|
||||
- **Data Processing**: Process datasets
|
||||
- **Custom**: Custom computations
|
||||
|
||||
## Job Specification
|
||||
|
||||
A job specification includes:
|
||||
- Model configuration
|
||||
- Input/output formats
|
||||
- Resource requirements
|
||||
- Pricing constraints
|
||||
|
||||
## Example
|
||||
|
||||
```yaml
name: "image-classification"
type: "ai-inference"
model:
  type: "python"
  entrypoint: "model.py"
```
|
||||
|
||||
## Submitting Jobs
|
||||
|
||||
Use the CLI or API to submit jobs:
|
||||
|
||||
```bash
|
||||
aitbc job submit job.yaml
|
||||
```
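The same submission can be made over the HTTP API. The endpoint below is the one used in the quickstart; the raw API accepts JSON, so supply a JSON version of the spec:

```bash
# Submit a JSON version of the job spec to the coordinator API
curl -X POST http://localhost:8011/v1/jobs \
  -H "Content-Type: application/json" \
  -H "X-API-Key: your-api-key" \
  -d @job.json
```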
|
||||
|
||||
## Monitoring
|
||||
|
||||
Track job progress through any of the following; a short example follows the list:
|
||||
- CLI commands
|
||||
- Web interface
|
||||
- API endpoints
|
||||
- WebSocket streams
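For example, using the CLI commands introduced in the quickstart and the public jobs endpoint (the job ID is illustrative):

```bash
# Current status of a submitted job
aitbc job status job_1234567890

# Stream logs until the job finishes
aitbc job logs job_1234567890 --follow

# The same information over the HTTP API
curl https://api.aitbc.io/v1/jobs/job_1234567890
```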
|
||||
46
docs/user-guide/explorer.md
Normal file
@ -0,0 +1,46 @@
|
||||
---
|
||||
title: Explorer
|
||||
description: Using the AITBC blockchain explorer
|
||||
---
|
||||
|
||||
# Explorer
|
||||
|
||||
The AITBC explorer allows you to browse and search the blockchain for transactions, jobs, and other activities.
|
||||
|
||||
## Features
|
||||
|
||||
### Transaction Search
|
||||
- Search by transaction hash
|
||||
- Filter by address
|
||||
- View transaction details
|
||||
|
||||
### Job Tracking
|
||||
- Monitor job status
|
||||
- View job history
|
||||
- Analyze performance
|
||||
|
||||
### Analytics
|
||||
- Network statistics
|
||||
- Volume metrics
|
||||
- Activity charts
|
||||
|
||||
## Using the Explorer
|
||||
|
||||
### Web Interface
|
||||
Visit [https://explorer.aitbc.io](https://explorer.aitbc.io)
|
||||
|
||||
### API Access
|
||||
```bash
|
||||
# Get transaction
|
||||
curl https://api.aitbc.io/v1/transactions/{tx_hash}
|
||||
|
||||
# Get job details
|
||||
curl https://api.aitbc.io/v1/jobs/{job_id}
|
||||
```
|
||||
|
||||
## Advanced Features
|
||||
|
||||
- Real-time updates
|
||||
- Custom dashboards
|
||||
- Data export
|
||||
- Alert notifications
|
||||
46
docs/user-guide/marketplace.md
Normal file
@ -0,0 +1,46 @@
|
||||
---
|
||||
title: Marketplace
|
||||
description: Using the AITBC marketplace
|
||||
---
|
||||
|
||||
# Marketplace
|
||||
|
||||
The AITBC marketplace connects job creators with miners who can execute their AI workloads.
|
||||
|
||||
## How It Works
|
||||
|
||||
1. **Job Creation**: Users create jobs with specific requirements
|
||||
2. **Offer Matching**: The marketplace finds suitable miners
|
||||
3. **Execution**: Miners execute the jobs and submit results
|
||||
4. **Payment**: Automatic payment upon successful completion
|
||||
|
||||
## Finding Services
|
||||
|
||||
Browse available services (a CLI example follows this list):
|
||||
- By job type
|
||||
- By price range
|
||||
- By miner reputation
|
||||
- By resource requirements
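A quick way to browse from the command line, using the marketplace commands shown in the quickstart (only the `--type` filter is documented there, so other filters are omitted):

```bash
# List every service currently offered
aitbc marketplace list

# Narrow the results to a single job type
aitbc marketplace search --type "image-classification"
```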
|
||||
|
||||
## Pricing
|
||||
|
||||
Dynamic pricing based on:
|
||||
- Market demand
|
||||
- Resource availability
|
||||
- Miner reputation
|
||||
- Job complexity
|
||||
|
||||
## Creating Offers
|
||||
|
||||
As a miner, you can (see the sketch after this list):
|
||||
- Set your prices
|
||||
- Specify job types
|
||||
- Define resource limits
|
||||
- Build reputation
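A minimal sketch of getting an offer online, using the miner commands from the quickstart (pricing and job-type flags are not documented there, so only resource limits are shown):

```bash
# Cap the resources and concurrency this miner offers, then start serving jobs
aitbc miner config --gpu-count 1 --max-jobs 5
aitbc miner start

# Confirm the miner is registered and accepting work
aitbc miner status
```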
|
||||
|
||||
## Safety Features
|
||||
|
||||
- Escrow payments
|
||||
- Dispute resolution
|
||||
- Reputation system
|
||||
- Cryptographic proofs
|
||||
27
docs/user-guide/overview.md
Normal file
@ -0,0 +1,27 @@
|
||||
---
|
||||
title: User Guide Overview
|
||||
description: Learn how to use AITBC as a user
|
||||
---
|
||||
|
||||
# User Guide Overview
|
||||
|
||||
Welcome to the AITBC user guide! This section will help you understand how to interact with the AITBC platform.
|
||||
|
||||
## What You'll Learn
|
||||
|
||||
- Creating and submitting AI jobs
|
||||
- Using the marketplace
|
||||
- Managing your wallet
|
||||
- Monitoring your jobs
|
||||
- Understanding receipts and proofs
|
||||
|
||||
## Getting Started
|
||||
|
||||
If you're new to AITBC, start with the [Quickstart Guide](../getting-started/quickstart.md).
|
||||
|
||||
## Navigation
|
||||
|
||||
- [Creating Jobs](creating-jobs.md) - Learn to submit AI workloads
|
||||
- [Marketplace](marketplace.md) - Buy and sell AI services
|
||||
- [Explorer](explorer.md) - Browse the blockchain
|
||||
- [Wallet Management](wallet-management.md) - Manage your funds
|
||||
65
docs/user-guide/wallet-management.md
Normal file
@ -0,0 +1,65 @@
|
||||
---
|
||||
title: Wallet Management
|
||||
description: Managing your AITBC wallet
|
||||
---
|
||||
|
||||
# Wallet Management
|
||||
|
||||
Your AITBC wallet allows you to store, send, and receive AITBC tokens and interact with the platform.
|
||||
|
||||
## Creating a Wallet
|
||||
|
||||
### New Wallet
|
||||
```bash
|
||||
aitbc wallet create
|
||||
```
|
||||
|
||||
### Import Existing
|
||||
```bash
|
||||
aitbc wallet import <private_key>
|
||||
```
|
||||
|
||||
## Wallet Operations
|
||||
|
||||
### Check Balance
|
||||
```bash
|
||||
aitbc wallet balance
|
||||
```
|
||||
|
||||
### Send Tokens
|
||||
```bash
|
||||
aitbc wallet send <address> <amount>
|
||||
```
|
||||
|
||||
### Transaction History
|
||||
```bash
|
||||
aitbc wallet history
|
||||
```
|
||||
|
||||
## Security
|
||||
|
||||
- Never share your private key
|
||||
- Use a hardware wallet for large amounts
|
||||
- Enable two-factor authentication
|
||||
- Keep backups in secure locations
|
||||
|
||||
## Staking
|
||||
|
||||
Earn rewards by staking your tokens:
|
||||
```bash
|
||||
aitbc wallet stake <amount>
|
||||
```
|
||||
|
||||
## Backup
|
||||
|
||||
Always backup your wallet:
|
||||
```bash
|
||||
aitbc wallet backup --output wallet.backup
|
||||
```
|
||||
|
||||
## Recovery
|
||||
|
||||
Restore from backup:
|
||||
```bash
|
||||
aitbc wallet restore --input wallet.backup
|
||||
```
|
||||
52
docs/user/getting-started/architecture.md
Normal file
@ -0,0 +1,52 @@
|
||||
---
|
||||
title: Architecture
|
||||
description: Technical architecture of the AITBC platform
|
||||
---
|
||||
|
||||
# Architecture
|
||||
|
||||
## Overview
|
||||
|
||||
AITBC consists of several interconnected components that work together to provide a secure and efficient AI computing platform.
|
||||
|
||||
## Components
|
||||
|
||||
### Coordinator API
|
||||
The central service managing jobs, marketplace operations, and coordination.
|
||||
|
||||
### Blockchain Nodes
|
||||
Maintain the distributed ledger and execute smart contracts.
|
||||
|
||||
### Wallet Daemon
|
||||
Manages cryptographic keys and transactions.
|
||||
|
||||
### Miners/Validators
|
||||
Execute AI computations and secure the network.
|
||||
|
||||
### Explorer
|
||||
Browse blockchain data and transactions.
|
||||
|
||||
## Data Flow
|
||||
|
||||
```mermaid
|
||||
sequenceDiagram
|
||||
participant U as User
|
||||
participant C as Coordinator
|
||||
participant M as Marketplace
|
||||
participant B as Blockchain
|
||||
participant V as Miner
|
||||
|
||||
U->>C: Submit Job
|
||||
C->>M: Find Offer
|
||||
M->>B: Create Transaction
|
||||
V->>B: Execute Job
|
||||
V->>C: Submit Results
|
||||
C->>U: Return Results
|
||||
```
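The same flow seen from a user's terminal, using the CLI commands introduced in the quickstart (the job ID is illustrative):

```bash
aitbc job submit job.yaml            # User -> Coordinator: submit the job spec
aitbc job status job_1234567890      # Coordinator/Marketplace: matching and execution progress
aitbc job results job_1234567890     # Coordinator -> User: results once the miner reports back
```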
|
||||
|
||||
## Security Model
|
||||
|
||||
- Cryptographic proofs for all computations
|
||||
- Multi-signature validation
|
||||
- Secure enclave support
|
||||
- Privacy-preserving techniques
|
||||
53
docs/user/getting-started/installation.md
Normal file
@ -0,0 +1,53 @@
|
||||
---
|
||||
title: Installation
|
||||
description: Install and set up AITBC on your system
|
||||
---
|
||||
|
||||
# Installation
|
||||
|
||||
This guide will help you install AITBC on your system.
|
||||
|
||||
## System Requirements
|
||||
|
||||
- Python 3.8 or higher
|
||||
- Docker and Docker Compose (optional)
|
||||
- 4GB RAM minimum
|
||||
- 10GB disk space
|
||||
|
||||
## Installation Methods
|
||||
|
||||
### Method 1: Docker (Recommended)
|
||||
|
||||
```bash
|
||||
# Clone the repository
|
||||
git clone https://github.com/aitbc/aitbc.git
|
||||
cd aitbc
|
||||
|
||||
# Start with Docker Compose
|
||||
docker-compose up -d
|
||||
```
|
||||
|
||||
### Method 2: pip Install
|
||||
|
||||
```bash
|
||||
# Install the CLI
|
||||
pip install aitbc-cli
|
||||
|
||||
# Verify installation
|
||||
aitbc --version
|
||||
```
|
||||
|
||||
### Method 3: From Source
|
||||
|
||||
```bash
|
||||
# Clone repository
|
||||
git clone https://github.com/aitbc/aitbc.git
|
||||
cd aitbc
|
||||
|
||||
# Install in development mode
|
||||
pip install -e .
|
||||
```
|
||||
|
||||
## Next Steps
|
||||
|
||||
After installation, proceed to the [Quickstart Guide](quickstart.md).
|
||||
93
docs/user/getting-started/introduction.md
Normal file
@ -0,0 +1,93 @@
|
||||
---
|
||||
title: Introduction to AITBC
|
||||
description: Learn about the AI Trusted Blockchain Computing platform
|
||||
---
|
||||
|
||||
# Introduction to AITBC
|
||||
|
||||
AITBC (AI Trusted Blockchain Computing) is a revolutionary platform that combines artificial intelligence with blockchain technology to create a secure, transparent, and efficient ecosystem for AI computations.
|
||||
|
||||
## What is AITBC?
|
||||
|
||||
AITBC enables:
|
||||
- **Verifiable AI Computations**: Execute AI workloads on the blockchain with cryptographic proofs
|
||||
- **Decentralized Marketplace**: Connect AI service providers with consumers in a trustless environment
|
||||
- **Fair Compensation**: Ensure fair payment for computational resources through smart contracts
|
||||
- **Privacy Preservation**: Maintain data privacy while enabling verification
|
||||
|
||||
## Key Features
|
||||
|
||||
### 🔒 Trust & Security
|
||||
- Cryptographic proofs of computation
|
||||
- Immutable audit trails
|
||||
- Secure multi-party computation
|
||||
|
||||
### ⚡ Performance
|
||||
- High-throughput consensus
|
||||
- GPU-accelerated computations
|
||||
- Optimized for AI workloads
|
||||
|
||||
### 💰 Economics
|
||||
- Token-based incentives
|
||||
- Dynamic pricing
|
||||
- Staking rewards
|
||||
|
||||
### 🌐 Accessibility
|
||||
- Easy-to-use APIs
|
||||
- SDKs for major languages
|
||||
- No blockchain expertise required
|
||||
|
||||
## Use Cases
|
||||
|
||||
### AI Service Providers
|
||||
- Monetize AI models
|
||||
- Reach global customers
|
||||
- Automated payments
|
||||
|
||||
### Data Scientists
|
||||
- Access compute resources
|
||||
- Verify results
|
||||
- Collaborate securely
|
||||
|
||||
### Enterprises
|
||||
- Private AI deployments
|
||||
- Compliance tracking
|
||||
- Cost optimization
|
||||
|
||||
### Developers
|
||||
- Build AI dApps
|
||||
- Integrate blockchain
|
||||
- Create new services
|
||||
|
||||
## Architecture
|
||||
|
||||
```mermaid
|
||||
graph LR
|
||||
A[Users] --> B[Coordinator API]
|
||||
B --> C[Marketplace]
|
||||
B --> D[Blockchain]
|
||||
D --> E[Miners]
|
||||
E --> F[AI Models]
|
||||
G[Wallets] --> B
|
||||
H[Explorer] --> D
|
||||
```
|
||||
|
||||
## Getting Started
|
||||
|
||||
Ready to dive in? Check out our [Quickstart Guide](quickstart.md) to get up and running in minutes.
|
||||
|
||||
## Learn More
|
||||
|
||||
- [Architecture Details](architecture.md)
|
||||
- [Installation Guide](installation.md)
|
||||
- [Developer Documentation](../developer-guide/)
|
||||
- [API Reference](../api/)
|
||||
|
||||
## Community
|
||||
|
||||
Join our community to learn, share, and collaborate:
|
||||
|
||||
- [Discord](https://discord.gg/aitbc)
|
||||
- [GitHub](https://github.com/aitbc)
|
||||
- [Blog](https://blog.aitbc.io)
|
||||
- [Twitter](https://twitter.com/aitbc)
|
||||
311
docs/user/getting-started/quickstart.md
Normal file
@ -0,0 +1,311 @@
|
||||
---
|
||||
title: Quickstart Guide
|
||||
description: Get up and running with AITBC in minutes
|
||||
---
|
||||
|
||||
# Quickstart Guide
|
||||
|
||||
This guide will help you get started with AITBC quickly. You'll learn how to set up a development environment, create your first AI job, and interact with the marketplace.
|
||||
|
||||
## Prerequisites
|
||||
|
||||
Before you begin, ensure you have:
|
||||
|
||||
- Python 3.8 or higher
|
||||
- Docker and Docker Compose
|
||||
- Git
|
||||
- A terminal or command line interface
|
||||
- Basic knowledge of AI/ML concepts (optional but helpful)
|
||||
|
||||
## 1. Installation
|
||||
|
||||
### Option A: Using Docker (Recommended)
|
||||
|
||||
The fastest way to get started is with Docker:
|
||||
|
||||
```bash
|
||||
# Clone the AITBC repository
|
||||
git clone https://github.com/aitbc/aitbc.git
|
||||
cd aitbc
|
||||
|
||||
# Start all services with Docker Compose
|
||||
docker-compose up -d
|
||||
|
||||
# Wait for services to be ready (takes 2-3 minutes)
|
||||
docker-compose logs -f
|
||||
```
|
||||
|
||||
### Option B: Local Development
|
||||
|
||||
For local development, install components individually:
|
||||
|
||||
```bash
|
||||
# Install the AITBC CLI
|
||||
pip install aitbc-cli
|
||||
|
||||
# Initialize a new project
|
||||
aitbc init my-ai-project
|
||||
cd my-ai-project
|
||||
|
||||
# Start local services
|
||||
aitbc dev start
|
||||
```
|
||||
|
||||
## 2. Verify Installation
|
||||
|
||||
Check that everything is working:
|
||||
|
||||
```bash
|
||||
# Check coordinator API health
|
||||
curl http://localhost:8011/v1/health
|
||||
|
||||
# Expected response:
|
||||
# {"status":"ok","env":"dev"}
|
||||
```
|
||||
|
||||
## 3. Create Your First AI Job
|
||||
|
||||
### Step 1: Prepare Your AI Model
|
||||
|
||||
Create a simple Python script for your AI model:
|
||||
|
||||
```python
# model.py
import numpy as np
from typing import Dict, Any


def process_image(image_data: bytes) -> Dict[str, Any]:
    """Process an image and return results"""
    # Your AI processing logic here
    # This is a simple example
    result = {
        "prediction": "cat",
        "confidence": 0.95,
        "processing_time": 0.123
    }
    return result


if __name__ == "__main__":
    import sys
    with open(sys.argv[1], 'rb') as f:
        data = f.read()
    result = process_image(data)
    print(result)
```
|
||||
|
||||
### Step 2: Create a Job Specification
|
||||
|
||||
Create a job file:
|
||||
|
||||
```yaml
# job.yaml
name: "image-classification"
description: "Classify images using AI model"
type: "ai-inference"

model:
  type: "python"
  entrypoint: "model.py"
  requirements:
    - numpy==1.21.0
    - pillow==8.3.0
    - torch==1.9.0

input:
  type: "image"
  format: "jpeg"
  max_size: "10MB"

output:
  type: "json"
  schema:
    prediction: string
    confidence: float
    processing_time: float

resources:
  cpu: "1000m"
  memory: "2Gi"
  gpu: "1"

pricing:
  max_cost: "0.10"
  per_inference: "0.001"
```
|
||||
|
||||
### Step 3: Submit the Job
|
||||
|
||||
Submit your job to the marketplace:
|
||||
|
||||
```bash
|
||||
# Using the CLI
|
||||
aitbc job submit job.yaml
|
||||
|
||||
# Or call the API directly with curl (send the JSON equivalent of job.yaml)
|
||||
curl -X POST http://localhost:8011/v1/jobs \
|
||||
-H "Content-Type: application/json" \
|
||||
-H "X-API-Key: your-api-key" \
|
||||
-d @job.json
|
||||
```
|
||||
|
||||
You'll receive a job ID in response:
|
||||
```json
|
||||
{
|
||||
"job_id": "job_1234567890",
|
||||
"status": "submitted",
|
||||
"estimated_completion": "2024-01-01T12:00:00Z"
|
||||
}
|
||||
```
|
||||
|
||||
## 4. Monitor Job Progress
|
||||
|
||||
Track your job's progress:
|
||||
|
||||
```bash
|
||||
# Check job status
|
||||
aitbc job status job_1234567890
|
||||
|
||||
# Stream logs
|
||||
aitbc job logs job_1234567890 --follow
|
||||
```
|
||||
|
||||
## 5. Get Results
|
||||
|
||||
Once the job completes, retrieve the results:
|
||||
|
||||
```bash
|
||||
# Get job results
|
||||
aitbc job results job_1234567890
|
||||
|
||||
# Download output files
|
||||
aitbc job download job_1234567890 --output ./results/
|
||||
```
|
||||
|
||||
## 6. Interact with the Marketplace
|
||||
|
||||
### Browse Available Services
|
||||
|
||||
```bash
|
||||
# List all available services
|
||||
aitbc marketplace list
|
||||
|
||||
# Search for specific services
|
||||
aitbc marketplace search --type "image-classification"
|
||||
```
|
||||
|
||||
### Use a Service
|
||||
|
||||
```bash
|
||||
# Use a service directly
|
||||
aitbc marketplace use service_456 \
|
||||
--input ./test-image.jpg \
|
||||
--output ./result.json
|
||||
```
|
||||
|
||||
## 7. Set Up a Wallet
|
||||
|
||||
Create a wallet to manage payments and rewards:
|
||||
|
||||
```bash
|
||||
# Create a new wallet
|
||||
aitbc wallet create
|
||||
|
||||
# Get wallet address
|
||||
aitbc wallet address
|
||||
|
||||
# Check balance
|
||||
aitbc wallet balance
|
||||
|
||||
# Fund your wallet (testnet only)
|
||||
aitbc wallet fund --amount 10
|
||||
```
|
||||
|
||||
## 8. Become a Miner
|
||||
|
||||
Run a miner to earn rewards:
|
||||
|
||||
```bash
|
||||
# Configure mining settings
|
||||
aitbc miner config \
|
||||
--gpu-count 1 \
|
||||
--max-jobs 5
|
||||
|
||||
# Start mining
|
||||
aitbc miner start
|
||||
|
||||
# Check mining status
|
||||
aitbc miner status
|
||||
```
|
||||
|
||||
## Next Steps
|
||||
|
||||
Congratulations! You've successfully:
|
||||
- ✅ Set up AITBC
|
||||
- ✅ Created and submitted an AI job
|
||||
- ✅ Interacted with the marketplace
|
||||
- ✅ Set up a wallet
|
||||
- ✅ Started mining
|
||||
|
||||
### What to explore next:
|
||||
|
||||
1. **Advanced Job Configuration**
|
||||
- Learn about [complex job types](user-guide/creating-jobs.md#advanced-jobs)
|
||||
- Explore [resource optimization](user-guide/creating-jobs.md#optimization)
|
||||
|
||||
2. **Marketplace Features**
|
||||
- Read about [pricing strategies](user-guide/marketplace.md#pricing)
|
||||
- Understand [reputation system](user-guide/marketplace.md#reputation)
|
||||
|
||||
3. **Development**
|
||||
- Check out the [Python SDK](developer-guide/sdks/python.md)
|
||||
- Explore [API documentation](api/coordinator/endpoints.md)
|
||||
|
||||
4. **Production Deployment**
|
||||
- Learn about [deployment strategies](operations/deployment.md)
|
||||
- Set up [monitoring](operations/monitoring.md)
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
### Common Issues
|
||||
|
||||
**Service won't start**
|
||||
```bash
|
||||
# Check Docker logs
|
||||
docker-compose logs coordinator
|
||||
|
||||
# Restart services
|
||||
docker-compose restart
|
||||
```
|
||||
|
||||
**Job submission fails**
|
||||
```bash
|
||||
# Verify API key
|
||||
aitbc auth verify
|
||||
|
||||
# Check service status
|
||||
aitbc status
|
||||
```
|
||||
|
||||
**Connection errors**
|
||||
```bash
|
||||
# Check network connectivity
|
||||
curl -v http://localhost:8011/v1/health
|
||||
|
||||
# Verify configuration
|
||||
aitbc config show
|
||||
```
|
||||
|
||||
### Get Help
|
||||
|
||||
- 📖 [Full Documentation](../)
|
||||
- 💬 [Discord Community](https://discord.gg/aitbc)
|
||||
- 🐛 [Report Issues](https://github.com/aitbc/issues)
|
||||
- 📧 [Email Support](mailto:support@aitbc.io)
|
||||
|
||||
---
|
||||
|
||||
!!! tip "Pro Tip"
|
||||
Join our [Discord](https://discord.gg/aitbc) to connect with other developers and get real-time help from the AITBC team.
|
||||
|
||||
!!! note "Testnet vs Mainnet"
|
||||
This quickstart uses the AITBC testnet. All transactions are free and don't involve real money. When you're ready for production, switch to mainnet with `aitbc config set network mainnet`.
|
||||
117
docs/user/index.md
Normal file
@ -0,0 +1,117 @@
|
||||
---
|
||||
title: Welcome to AITBC
|
||||
description: AI Trusted Blockchain Computing Platform - Secure, scalable, and developer-friendly blockchain infrastructure for AI workloads
|
||||
---
|
||||
|
||||
# Welcome to AITBC Documentation
|
||||
|
||||
!!! tip "New to AITBC?"
|
||||
Start with our [Quickstart Guide](getting-started/quickstart.md) to get up and running in minutes.
|
||||
|
||||
AITBC (AI Trusted Blockchain Computing) is a next-generation blockchain platform specifically designed for AI workloads. It provides a secure, scalable, and developer-friendly infrastructure for running AI computations on the blockchain with verifiable proofs.
|
||||
|
||||
## 🚀 Key Features
|
||||
|
||||
### **AI-Native Design**
|
||||
- Built from the ground up for AI workloads
|
||||
- Support for machine learning model execution
|
||||
- Verifiable computation proofs for AI results
|
||||
- GPU-accelerated computing capabilities
|
||||
|
||||
### **Marketplace Integration**
|
||||
- Decentralized marketplace for AI services
|
||||
- Transparent pricing and reputation system
|
||||
- Smart contract-based job execution
|
||||
- Automated dispute resolution
|
||||
|
||||
### **Developer-Friendly**
|
||||
- RESTful APIs with OpenAPI specifications
|
||||
- SDK support for Python and JavaScript
|
||||
- Comprehensive documentation and examples
|
||||
- Easy integration with existing AI/ML pipelines
|
||||
|
||||
### **Enterprise-Ready**
|
||||
- High-performance consensus mechanism
|
||||
- Horizontal scaling capabilities
|
||||
- Comprehensive monitoring and observability
|
||||
- Security-hardened infrastructure
|
||||
|
||||
## 🏛️ Architecture Overview
|
||||
|
||||
```mermaid
|
||||
graph TB
|
||||
subgraph "AITBC Ecosystem"
|
||||
A[Client Applications] --> B[Coordinator API]
|
||||
B --> C[Marketplace]
|
||||
B --> D[Blockchain Nodes]
|
||||
D --> E[Miners/Validators]
|
||||
D --> F[Ledger Storage]
|
||||
G[Wallet Daemon] --> B
|
||||
H[Explorer] --> D
|
||||
end
|
||||
|
||||
subgraph "External Services"
|
||||
I[AI/ML Models] --> D
|
||||
J[Storage Systems] --> D
|
||||
K[Oracles] --> D
|
||||
end
|
||||
```
|
||||
|
||||
## 📚 What's in this Documentation
|
||||
|
||||
### For Users
|
||||
- [Getting Started](getting-started/) - Learn the basics and get running quickly
|
||||
- [User Guide](../user-guide/) - Comprehensive guide to using AITBC features
|
||||
- [Tutorials](../developer/tutorials/) - Step-by-step guides for common tasks
|
||||
|
||||
### For Developers
|
||||
- [Developer Guide](../developer/) - Set up your development environment
|
||||
- [API Reference](../developer/api/) - Detailed API documentation
|
||||
- [SDKs](../developer/sdks/) - Python and JavaScript SDK guides
|
||||
|
||||
### For Operators
|
||||
- [Operations Guide](../operator/) - Deployment and maintenance
|
||||
- [Security](../operator/security.md) - Security best practices
|
||||
- [Monitoring](../operator/monitoring/) - Observability setup
|
||||
|
||||
### For Ecosystem Participants
|
||||
- [Hackathons](../ecosystem/hackathons/) - Join our developer events
|
||||
- [Grants](../ecosystem/grants/) - Apply for ecosystem funding
|
||||
- [Certification](../ecosystem/certification/) - Get your solution certified
|
||||
|
||||
## 🎯 Quick Links
|
||||
|
||||
| Resource | Description | Link |
|
||||
|----------|-------------|------|
|
||||
| **Try AITBC** | Interactive demo environment | [Demo Portal](https://demo.aitbc.io) |
|
||||
| **GitHub** | Source code and contributions | [github.com/aitbc](https://github.com/aitbc) |
|
||||
| **Discord** | Community support | [Join our Discord](https://discord.gg/aitbc) |
|
||||
| **Blog** | Latest updates and tutorials | [AITBC Blog](https://blog.aitbc.io) |
|
||||
|
||||
## 🆘 Getting Help
|
||||
|
||||
!!! question "Need assistance?"
|
||||
- 📖 Check our [FAQ](resources/faq.md) for common questions
|
||||
- 💬 Join our [Discord community](https://discord.gg/aitbc) for real-time support
|
||||
- 🐛 Report issues on [GitHub](https://github.com/aitbc/issues)
|
||||
- 📧 Email us at [support@aitbc.io](mailto:support@aitbc.io)
|
||||
|
||||
## 🌟 Contributing
|
||||
|
||||
We welcome contributions from the community! Whether you're fixing bugs, improving documentation, or proposing new features, we'd love to have you involved.
|
||||
|
||||
Check out our [Contributing Guide](developer-guide/contributing.md) to get started.
|
||||
|
||||
---
|
||||
|
||||
!!! info "Stay Updated"
|
||||
Subscribe to our newsletter for the latest updates, releases, and community news.
|
||||
|
||||
[Subscribe Now](https://aitbc.io/newsletter)
|
||||
|
||||
---
|
||||
|
||||
<div align="center">
|
||||
<p>Built with ❤️ by the AITBC Team</p>
|
||||
<p><a href="https://github.com/aitbc/docs/blob/main/LICENSE">License</a> | <a href="https://aitbc.io/privacy">Privacy Policy</a> | <a href="https://aitbc.io/terms">Terms of Service</a></p>
|
||||
</div>
|
||||