chore: standardize configuration, logging, and error handling across blockchain node and coordinator API
- Add infrastructure.md and workflow files to .gitignore to prevent sensitive info leaks
- Change blockchain node mempool backend default from memory to database for persistence
- Refactor blockchain node logger with StructuredLogFormatter and AuditLogger (consistent with coordinator)
- Add structured logging fields: service, module, function, line number
- Unify coordinator config with Database
# Cross-Chain Settlement Hooks Design

## Overview

This document outlines the architecture for cross-chain settlement hooks in AITBC, enabling job receipts and proofs to be settled across multiple blockchains using various bridge protocols.

## Architecture

### Core Components

```
┌─────────────────┐    ┌──────────────────┐    ┌─────────────────┐
│   AITBC Chain   │    │ Settlement Hooks │    │  Target Chains  │
│                 │    │                  │    │                 │
│ - Job Receipts  │───▶│ - Bridge Manager │───▶│ - Ethereum      │
│ - Proofs        │    │ - Adapters       │    │ - Polygon       │
│ - Payments      │    │ - Router         │    │ - BSC           │
│                 │    │ - Validator      │    │ - Arbitrum      │
└─────────────────┘    └──────────────────┘    └─────────────────┘
```

### Settlement Hook Interface

```python
from abc import ABC, abstractmethod
from typing import Dict, Any, List
from dataclasses import dataclass


@dataclass
class SettlementMessage:
    """Message to be settled across chains"""
    source_chain_id: int
    target_chain_id: int
    job_id: str
    receipt_hash: str
    proof_data: Dict[str, Any]
    payment_amount: int
    payment_token: str
    nonce: int
    signature: str


class BridgeAdapter(ABC):
    """Abstract interface for bridge adapters"""

    @abstractmethod
    async def send_message(self, message: SettlementMessage) -> str:
        """Send message to target chain"""
        pass

    @abstractmethod
    async def verify_delivery(self, message_id: str) -> bool:
        """Verify message was delivered"""
        pass

    @abstractmethod
    async def estimate_cost(self, message: SettlementMessage) -> Dict[str, int]:
        """Estimate bridge fees"""
        pass

    @abstractmethod
    def get_supported_chains(self) -> List[int]:
        """Get list of supported target chains"""
        pass

    @abstractmethod
    def get_max_message_size(self) -> int:
        """Get maximum message size in bytes"""
        pass
```

### Bridge Manager

```python
from typing import Optional


class BridgeManager:
    """Manages multiple bridge adapters"""

    def __init__(self):
        self.adapters: Dict[str, BridgeAdapter] = {}
        self.default_adapter: Optional[str] = None

    def register_adapter(self, name: str, adapter: BridgeAdapter):
        """Register a bridge adapter"""
        self.adapters[name] = adapter

    async def settle_cross_chain(
        self,
        message: SettlementMessage,
        bridge_name: Optional[str] = None
    ) -> str:
        """Settle message across chains"""
        adapter = self._get_adapter(bridge_name)

        # Validate message
        self._validate_message(message, adapter)

        # Send message
        message_id = await adapter.send_message(message)

        # Store settlement record
        await self._store_settlement(message_id, message)

        return message_id

    def _get_adapter(self, bridge_name: Optional[str] = None) -> BridgeAdapter:
        """Get bridge adapter"""
        if bridge_name:
            return self.adapters[bridge_name]
        return self.adapters[self.default_adapter]
```
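
The manager/adapter composition above can be exercised end to end with an in-memory test double. This is a minimal, self-contained sketch: the `InMemoryAdapter` and the slimmed-down manager are stand-ins for illustration, not the real implementations (validation and settlement storage are omitted).

```python
import asyncio
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Any, Dict, List, Optional


@dataclass
class SettlementMessage:
    source_chain_id: int
    target_chain_id: int
    job_id: str
    receipt_hash: str
    proof_data: Dict[str, Any]
    payment_amount: int
    payment_token: str
    nonce: int
    signature: str


class BridgeAdapter(ABC):
    @abstractmethod
    async def send_message(self, message: SettlementMessage) -> str: ...

    @abstractmethod
    def get_supported_chains(self) -> List[int]: ...


class InMemoryAdapter(BridgeAdapter):
    """Test double that records messages instead of bridging them."""

    def __init__(self):
        self.sent: Dict[str, SettlementMessage] = {}

    async def send_message(self, message: SettlementMessage) -> str:
        message_id = f"msg-{message.nonce}"
        self.sent[message_id] = message
        return message_id

    def get_supported_chains(self) -> List[int]:
        return [1, 137]


class BridgeManager:
    def __init__(self):
        self.adapters: Dict[str, BridgeAdapter] = {}
        self.default_adapter: Optional[str] = None

    def register_adapter(self, name: str, adapter: BridgeAdapter) -> None:
        self.adapters[name] = adapter
        if self.default_adapter is None:
            self.default_adapter = name

    async def settle_cross_chain(self, message, bridge_name=None) -> str:
        adapter = self.adapters[bridge_name or self.default_adapter]
        return await adapter.send_message(message)


adapter = InMemoryAdapter()
manager = BridgeManager()
manager.register_adapter("inmemory", adapter)

msg = SettlementMessage(1, 137, "job-1", "0xabc", {}, 10, "0xtoken", 1, "sig")
message_id = asyncio.run(manager.settle_cross_chain(msg))
print(message_id)  # msg-1
```

Swapping the test double for a real adapter is purely a registration change, which is the point of the adapter abstraction.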

## Bridge Implementations

### 1. LayerZero Adapter

```python
import json

from eth_abi import abi  # ABI encoding (eth-abi package)


class LayerZeroAdapter(BridgeAdapter):
    """LayerZero bridge adapter"""

    def __init__(self, endpoint_address: str, chain_id: int):
        self.endpoint = endpoint_address
        self.chain_id = chain_id
        self.contract = self._load_contract()

    async def send_message(self, message: SettlementMessage) -> str:
        """Send via LayerZero"""
        # Encode settlement data
        payload = self._encode_payload(message)

        # Estimate fees
        fees = await self._estimate_fees(message)

        # Send transaction
        tx = await self.contract.send(
            message.target_chain_id,
            self._get_target_address(message.target_chain_id),
            payload,
            message.payment_amount,
            message.payment_token,
            fees
        )

        return tx.hash

    def _encode_payload(self, message: SettlementMessage) -> bytes:
        """Encode message for LayerZero"""
        return abi.encode(
            ['string', 'bytes32', 'bytes', 'uint256', 'address'],
            [
                message.job_id,
                message.receipt_hash,
                json.dumps(message.proof_data).encode(),
                message.payment_amount,
                message.payment_token
            ]
        )
```

### 2. Chainlink CCIP Adapter

```python
class ChainlinkCCIPAdapter(BridgeAdapter):
    """Chainlink CCIP bridge adapter"""

    def __init__(self, router_address: str, chain_id: int):
        self.router = router_address
        self.chain_id = chain_id
        self.contract = self._load_contract()

    async def send_message(self, message: SettlementMessage) -> str:
        """Send via Chainlink CCIP"""
        # Create CCIP message
        ccip_message = {
            'receiver': self._get_target_address(message.target_chain_id),
            'data': self._encode_payload(message),
            'tokenAmounts': [{
                'token': message.payment_token,
                'amount': message.payment_amount
            }]
        }

        # Estimate fees
        fees = await self.contract.getFee(ccip_message)

        # Send transaction
        tx = await self.contract.ccipSend(ccip_message, {'value': fees})

        return tx.hash
```

### 3. Wormhole Adapter

```python
class WormholeAdapter(BridgeAdapter):
    """Wormhole bridge adapter"""

    def __init__(self, bridge_address: str, chain_id: int):
        self.bridge = bridge_address
        self.chain_id = chain_id
        self.contract = self._load_contract()

    async def send_message(self, message: SettlementMessage) -> str:
        """Send via Wormhole"""
        # Encode payload
        payload = self._encode_payload(message)

        # Send transaction
        tx = await self.contract.publishMessage(
            message.nonce,
            payload,
            message.payment_amount
        )

        return tx.hash
```
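
With several adapters registered, the Router can pick a bridge by estimated cost (the "cost optimization" planned for Phase 2). The sketch below is illustrative: `StubAdapter` and the fee figures are stand-ins, and only `estimate_cost` / `get_supported_chains` from the interface are exercised.

```python
import asyncio


class StubAdapter:
    """Stand-in adapter exposing just cost and chain-support queries."""

    def __init__(self, name, chains, fee):
        self.name, self.chains, self.fee = name, chains, fee

    def get_supported_chains(self):
        return self.chains

    async def estimate_cost(self, message):
        return {"native_fee": self.fee}


async def cheapest_bridge(adapters, message, target_chain_id):
    # Keep only adapters that can reach the target chain, then
    # query their fee estimates concurrently and pick the cheapest.
    candidates = [a for a in adapters if target_chain_id in a.get_supported_chains()]
    costs = await asyncio.gather(*(a.estimate_cost(message) for a in candidates))
    best = min(zip(candidates, costs), key=lambda pair: pair[1]["native_fee"])
    return best[0].name


adapters = [
    StubAdapter("layerzero", [1, 137, 42161], 1000),
    StubAdapter("chainlink_ccip", [1, 137, 42161], 2000),
    StubAdapter("wormhole", [1, 137], 1500),
]
print(asyncio.run(cheapest_bridge(adapters, None, 42161)))  # layerzero
```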

## Integration with Coordinator

### Settlement Hook in Coordinator

```python
class SettlementHook:
    """Settlement hook for coordinator"""

    def __init__(self, bridge_manager: BridgeManager):
        self.bridge_manager = bridge_manager

    async def on_job_completed(self, job: Job) -> None:
        """Called when job completes"""
        # Check if cross-chain settlement needed
        if job.requires_cross_chain_settlement:
            await self._settle_cross_chain(job)

    async def _settle_cross_chain(self, job: Job) -> None:
        """Settle job across chains"""
        # Create settlement message
        message = SettlementMessage(
            source_chain_id=await self._get_chain_id(),
            target_chain_id=job.target_chain,
            job_id=job.id,
            receipt_hash=job.receipt.hash,
            proof_data=job.receipt.proof,
            payment_amount=job.payment_amount,
            payment_token=job.payment_token,
            nonce=await self._get_nonce(),
            signature=await self._sign_message(job)
        )

        # Send via appropriate bridge
        await self.bridge_manager.settle_cross_chain(
            message,
            bridge_name=job.preferred_bridge
        )
```

### Coordinator API Endpoints

```python
@app.post("/v1/settlement/cross-chain")
async def initiate_cross_chain_settlement(
    request: CrossChainSettlementRequest
):
    """Initiate cross-chain settlement"""
    job = await get_job(request.job_id)

    if not job.completed:
        raise HTTPException(400, "Job not completed")

    # Create settlement message
    message = SettlementMessage(
        source_chain_id=request.source_chain,
        target_chain_id=request.target_chain,
        job_id=job.id,
        receipt_hash=job.receipt.hash,
        proof_data=job.receipt.proof,
        payment_amount=request.amount,
        payment_token=request.token,
        nonce=await generate_nonce(),
        signature=await sign_settlement(job, request)
    )

    # Send settlement via the bridge manager
    message_id = await bridge_manager.settle_cross_chain(message)

    return {"message_id": message_id, "status": "pending"}


@app.get("/v1/settlement/{message_id}/status")
async def get_settlement_status(message_id: str):
    """Get settlement status"""
    status = await bridge_manager.get_settlement_status(message_id)
    return status
```

## Configuration

### Bridge Configuration

```yaml
bridges:
  layerzero:
    enabled: true
    endpoint_address: "0x..."
    supported_chains: [1, 137, 56, 42161]
    default_fee: "0.001"

  chainlink_ccip:
    enabled: true
    router_address: "0x..."
    supported_chains: [1, 137, 56, 42161]
    default_fee: "0.002"

  wormhole:
    enabled: false
    bridge_address: "0x..."
    supported_chains: [1, 137, 56]
    default_fee: "0.0015"

settlement:
  default_bridge: "layerzero"
  max_retries: 3
  retry_delay: 30
  timeout: 3600
```
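
A loader for this configuration only has to honor the `enabled` flags and check that the default bridge is usable. The sketch below works on the parsed config as a Python dict (as a YAML loader would produce it); the loader itself is hypothetical.

```python
# Parsed form of the YAML above (addresses elided as in the document).
config = {
    "bridges": {
        "layerzero": {"enabled": True, "endpoint_address": "0x...",
                      "supported_chains": [1, 137, 56, 42161]},
        "chainlink_ccip": {"enabled": True, "router_address": "0x..."},
        "wormhole": {"enabled": False, "bridge_address": "0x..."},
    },
    "settlement": {"default_bridge": "layerzero", "max_retries": 3},
}

# Keep only bridges with enabled: true.
enabled = {name: cfg for name, cfg in config["bridges"].items() if cfg.get("enabled")}

# The default bridge must be among the enabled ones.
default = config["settlement"]["default_bridge"]
assert default in enabled, "default bridge must be enabled"

print(sorted(enabled))  # ['chainlink_ccip', 'layerzero']
```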

## Security Considerations

### Message Validation
- Verify signatures on all settlement messages
- Validate chain IDs and addresses
- Check message size limits
- Prevent replay attacks with nonces
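
The nonce-based replay protection can be sketched as a guard that tracks the last accepted nonce per sender and chain. This is a minimal illustration under the assumption of one monotonically increasing nonce per `(source_chain_id, sender)` pair; the real validator would persist this state.

```python
from typing import Dict, Tuple


class NonceGuard:
    """Rejects settlement messages whose nonce has already been used."""

    def __init__(self):
        # (source_chain_id, sender) -> highest nonce accepted so far
        self._last: Dict[Tuple[int, str], int] = {}

    def accept(self, source_chain_id: int, sender: str, nonce: int) -> bool:
        key = (source_chain_id, sender)
        if nonce <= self._last.get(key, -1):
            return False  # replayed or stale nonce: reject
        self._last[key] = nonce
        return True


guard = NonceGuard()
print(guard.accept(1, "0xabc", 0))  # True
print(guard.accept(1, "0xabc", 0))  # False (replay)
```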

### Bridge Security
- Use reputable audited bridge contracts
- Implement bridge-specific security checks
- Monitor for bridge vulnerabilities
- Have fallback mechanisms

### Economic Security
- Validate payment amounts
- Check token allowances
- Implement fee limits
- Monitor for economic attacks

## Monitoring

### Metrics to Track
- Settlement success rate per bridge
- Average settlement time
- Cost per settlement
- Failed settlement reasons
- Bridge health status

### Alerts
- Settlement failures
- High settlement costs
- Bridge downtime
- Unusual settlement patterns

## Testing

### Test Scenarios
1. **Happy Path**: Successful settlement across chains
2. **Bridge Failure**: Handle bridge unavailability
3. **Message Too Large**: Handle size limits
4. **Insufficient Funds**: Handle payment failures
5. **Replay Attack**: Prevent duplicate settlements

### Test Networks
- Ethereum Sepolia
- Polygon Mumbai
- BSC Testnet
- Arbitrum Goerli

## Migration Path

### Phase 1: Single Bridge
- Implement LayerZero adapter
- Basic settlement functionality
- Test on testnets

### Phase 2: Multiple Bridges
- Add Chainlink CCIP
- Implement bridge selection logic
- Add cost optimization

### Phase 3: Advanced Features
- Add Wormhole support
- Implement atomic settlements
- Add settlement routing

## Future Enhancements

1. **Atomic Settlements**: Ensure all-or-nothing settlements
2. **Settlement Routing**: Automatically select optimal bridge
3. **Batch Settlements**: Settle multiple jobs together
4. **Cross-Chain Governance**: Governance across chains
5. **Privacy Features**: Confidential settlements

---

*Document Version: 1.0*
*Last Updated: 2025-01-10*
*Owner: Core Protocol Team*

# Python SDK Transport Abstraction Design

## Overview

This document outlines the design for a pluggable transport abstraction layer in the AITBC Python SDK, enabling support for multiple networks and cross-chain operations.

## Architecture

### Current SDK Structure

```
AITBCClient
├── Jobs API
├── Marketplace API
├── Wallet API
├── Receipts API
└── Direct HTTP calls to coordinator
```

### Proposed Transport-Based Structure

```
AITBCClient
├── Transport Layer (Pluggable)
│   ├── HTTPTransport
│   ├── WebSocketTransport
│   └── CrossChainTransport
├── Jobs API
├── Marketplace API
├── Wallet API
├── Receipts API
└── Settlement API (New)
```

## Transport Interface

### Base Transport Class

```python
from abc import ABC, abstractmethod
from typing import AsyncIterator, Dict, Any, Optional, Union
import asyncio


class Transport(ABC):
    """Abstract base class for all transports"""

    def __init__(self, config: Dict[str, Any]):
        self.config = config
        self._connected = False

    @abstractmethod
    async def connect(self) -> None:
        """Establish connection"""
        pass

    @abstractmethod
    async def disconnect(self) -> None:
        """Close connection"""
        pass

    @abstractmethod
    async def request(
        self,
        method: str,
        path: str,
        data: Optional[Dict[str, Any]] = None,
        params: Optional[Dict[str, Any]] = None,
        headers: Optional[Dict[str, str]] = None
    ) -> Dict[str, Any]:
        """Make a request"""
        pass

    @abstractmethod
    async def stream(
        self,
        method: str,
        path: str,
        data: Optional[Dict[str, Any]] = None
    ) -> AsyncIterator[Dict[str, Any]]:
        """Stream responses"""
        pass

    @property
    def is_connected(self) -> bool:
        """Check if transport is connected"""
        return self._connected

    @property
    def chain_id(self) -> Optional[int]:
        """Get the chain ID this transport is connected to"""
        return self.config.get('chain_id')
```
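
One of the design's stated benefits is testability: anything satisfying this interface can stand in for a real transport. The duck-typed sketch below returns canned responses keyed by `(method, path)`; it is a hypothetical test helper, not part of the SDK.

```python
import asyncio
from typing import Any, Dict


class InMemoryTransport:
    """Transport-shaped test double that serves canned responses."""

    def __init__(self, responses: Dict[tuple, Dict[str, Any]]):
        self.responses = responses
        self._connected = False

    async def connect(self) -> None:
        self._connected = True

    async def disconnect(self) -> None:
        self._connected = False

    async def request(self, method, path, data=None, params=None, headers=None):
        # Look up the canned response for this method/path pair.
        return self.responses[(method, path)]

    async def stream(self, method, path, data=None):
        # A one-item "stream" is enough for unit tests.
        yield self.responses[(method, path)]


transport = InMemoryTransport({("GET", "/v1/jobs/1"): {"id": "1", "status": "done"}})
result = asyncio.run(transport.request("GET", "/v1/jobs/1"))
print(result["status"])  # done
```

API clients built against the `Transport` interface can then be unit-tested with no network at all.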

### HTTP Transport Implementation

```python
import aiohttp
from typing import AsyncIterator


class HTTPTransport(Transport):
    """HTTP transport for REST API calls"""

    def __init__(self, config: Dict[str, Any]):
        super().__init__(config)
        self.base_url = config['base_url']
        self.session: Optional[aiohttp.ClientSession] = None
        self.timeout = config.get('timeout', 30)

    async def connect(self) -> None:
        """Create HTTP session"""
        connector = aiohttp.TCPConnector(
            limit=100,
            limit_per_host=30,
            ttl_dns_cache=300,
            use_dns_cache=True,
        )

        timeout = aiohttp.ClientTimeout(total=self.timeout)
        self.session = aiohttp.ClientSession(
            connector=connector,
            timeout=timeout,
            headers=self.config.get('default_headers', {})
        )
        self._connected = True

    async def disconnect(self) -> None:
        """Close HTTP session"""
        if self.session:
            await self.session.close()
            self.session = None
        self._connected = False

    async def request(
        self,
        method: str,
        path: str,
        data: Optional[Dict[str, Any]] = None,
        params: Optional[Dict[str, Any]] = None,
        headers: Optional[Dict[str, str]] = None
    ) -> Dict[str, Any]:
        """Make HTTP request"""
        if not self.session:
            await self.connect()

        url = f"{self.base_url}{path}"

        async with self.session.request(
            method=method,
            url=url,
            json=data,
            params=params,
            headers=headers
        ) as response:
            if response.status >= 400:
                error_data = await response.json()
                raise APIError(error_data.get('error', 'Unknown error'))

            return await response.json()

    async def stream(
        self,
        method: str,
        path: str,
        data: Optional[Dict[str, Any]] = None
    ) -> AsyncIterator[Dict[str, Any]]:
        """Stream HTTP responses (not supported for basic HTTP)"""
        raise NotImplementedError("HTTP transport does not support streaming")
```

### WebSocket Transport Implementation

```python
import websockets
import json
from typing import AsyncIterator


class WebSocketTransport(Transport):
    """WebSocket transport for real-time updates"""

    def __init__(self, config: Dict[str, Any]):
        super().__init__(config)
        self.ws_url = config['ws_url']
        self.websocket: Optional[websockets.WebSocketClientProtocol] = None
        self._subscriptions: Dict[str, Any] = {}

    async def connect(self) -> None:
        """Connect to WebSocket"""
        self.websocket = await websockets.connect(
            self.ws_url,
            extra_headers=self.config.get('headers', {})
        )
        self._connected = True

        # Start message handler
        asyncio.create_task(self._handle_messages())

    async def disconnect(self) -> None:
        """Disconnect WebSocket"""
        if self.websocket:
            await self.websocket.close()
            self.websocket = None
        self._connected = False

    async def request(
        self,
        method: str,
        path: str,
        data: Optional[Dict[str, Any]] = None,
        params: Optional[Dict[str, Any]] = None,
        headers: Optional[Dict[str, str]] = None
    ) -> Dict[str, Any]:
        """Send request via WebSocket"""
        if not self.websocket:
            await self.connect()

        message = {
            'id': self._generate_id(),
            'method': method,
            'path': path,
            'data': data,
            'params': params
        }

        await self.websocket.send(json.dumps(message))
        response = await self.websocket.recv()
        return json.loads(response)

    async def stream(
        self,
        method: str,
        path: str,
        data: Optional[Dict[str, Any]] = None
    ) -> AsyncIterator[Dict[str, Any]]:
        """Stream responses from WebSocket"""
        if not self.websocket:
            await self.connect()

        # Subscribe to stream
        subscription_id = self._generate_id()
        message = {
            'id': subscription_id,
            'method': 'subscribe',
            'path': path,
            'data': data
        }

        await self.websocket.send(json.dumps(message))

        # Yield messages as they come
        async for message in self.websocket:
            data = json.loads(message)
            if data.get('subscription_id') == subscription_id:
                yield data

    async def _handle_messages(self):
        """Handle incoming WebSocket messages"""
        async for message in self.websocket:
            data = json.loads(message)
            # Handle subscriptions and other messages
            pass
```

### Cross-Chain Transport Implementation

```python
from ..settlement.manager import BridgeManager
from ..settlement.bridges.base import SettlementMessage, SettlementResult


class CrossChainTransport(Transport):
    """Transport for cross-chain settlements"""

    def __init__(self, config: Dict[str, Any]):
        super().__init__(config)
        self.bridge_manager = BridgeManager(config.get('storage'))
        self.base_transport = config.get('base_transport')

    async def connect(self) -> None:
        """Initialize bridge manager"""
        await self.bridge_manager.initialize(self.config.get('bridges', {}))
        if self.base_transport:
            await self.base_transport.connect()
        self._connected = True

    async def disconnect(self) -> None:
        """Disconnect all bridges"""
        if self.base_transport:
            await self.base_transport.disconnect()
        self._connected = False

    async def request(
        self,
        method: str,
        path: str,
        data: Optional[Dict[str, Any]] = None,
        params: Optional[Dict[str, Any]] = None,
        headers: Optional[Dict[str, str]] = None
    ) -> Dict[str, Any]:
        """Handle cross-chain requests"""
        if path.startswith('/settlement/'):
            return await self._handle_settlement_request(method, path, data)

        # Forward to base transport for other requests
        if self.base_transport:
            return await self.base_transport.request(
                method, path, data, params, headers
            )

        raise NotImplementedError(f"Path {path} not supported")

    async def settle_cross_chain(
        self,
        message: SettlementMessage,
        bridge_name: Optional[str] = None
    ) -> SettlementResult:
        """Settle message across chains"""
        return await self.bridge_manager.settle_cross_chain(
            message, bridge_name
        )

    async def estimate_settlement_cost(
        self,
        message: SettlementMessage,
        bridge_name: Optional[str] = None
    ) -> Dict[str, Any]:
        """Estimate settlement cost"""
        return await self.bridge_manager.estimate_settlement_cost(
            message, bridge_name
        )

    async def _handle_settlement_request(
        self,
        method: str,
        path: str,
        data: Optional[Dict[str, Any]]
    ) -> Dict[str, Any]:
        """Handle settlement-specific requests"""
        if method == 'POST' and path == '/settlement/cross-chain':
            message = SettlementMessage(**data)
            result = await self.settle_cross_chain(message)
            return {
                'message_id': result.message_id,
                'status': result.status.value,
                'transaction_hash': result.transaction_hash
            }

        elif method == 'GET' and path.startswith('/settlement/'):
            message_id = path.split('/')[-1]
            result = await self.bridge_manager.get_settlement_status(message_id)
            return {
                'message_id': message_id,
                'status': result.status.value,
                'error_message': result.error_message
            }

        else:
            raise ValueError(f"Unsupported settlement request: {method} {path}")
```

## Multi-Network Client

### Network Configuration

```python
from dataclasses import dataclass
from typing import Dict, List, Optional


@dataclass
class NetworkConfig:
    """Configuration for a network"""
    name: str
    chain_id: int
    transport: Transport
    is_default: bool = False
    bridges: Optional[List[str]] = None


class MultiNetworkClient:
    """Client supporting multiple networks"""

    def __init__(self):
        self.networks: Dict[int, NetworkConfig] = {}
        self.default_network: Optional[int] = None

    def add_network(self, config: NetworkConfig) -> None:
        """Add a network configuration"""
        self.networks[config.chain_id] = config
        if config.is_default or self.default_network is None:
            self.default_network = config.chain_id

    def get_transport(self, chain_id: Optional[int] = None) -> Transport:
        """Get transport for a network"""
        network_id = chain_id or self.default_network
        if network_id not in self.networks:
            raise ValueError(f"Network {network_id} not configured")

        return self.networks[network_id].transport

    async def connect_all(self) -> None:
        """Connect to all configured networks"""
        for config in self.networks.values():
            await config.transport.connect()

    async def disconnect_all(self) -> None:
        """Disconnect from all networks"""
        for config in self.networks.values():
            await config.transport.disconnect()
```
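
The selection logic above can be demonstrated standalone. The snippet repeats the two classes in minimal form so it runs on its own, and uses plain strings as stand-ins for transport objects: the first network added becomes the fallback unless another is flagged `is_default`.

```python
from dataclasses import dataclass
from typing import Dict, Optional


@dataclass
class NetworkConfig:
    name: str
    chain_id: int
    transport: str  # stand-in for a Transport instance
    is_default: bool = False


class MultiNetworkClient:
    def __init__(self):
        self.networks: Dict[int, NetworkConfig] = {}
        self.default_network: Optional[int] = None

    def add_network(self, config: NetworkConfig) -> None:
        self.networks[config.chain_id] = config
        if config.is_default or self.default_network is None:
            self.default_network = config.chain_id

    def get_transport(self, chain_id: Optional[int] = None) -> str:
        network_id = chain_id or self.default_network
        if network_id not in self.networks:
            raise ValueError(f"Network {network_id} not configured")
        return self.networks[network_id].transport


client = MultiNetworkClient()
client.add_network(NetworkConfig("polygon", 137, "polygon-transport"))
client.add_network(NetworkConfig("ethereum", 1, "eth-transport", is_default=True))
print(client.get_transport())     # eth-transport
print(client.get_transport(137))  # polygon-transport
```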

## Updated SDK Client

### New Client Implementation

```python
class AITBCClient:
    """AITBC client with pluggable transports"""

    def __init__(
        self,
        transport: Optional[Union[Transport, Dict[str, Any]]] = None,
        multi_network: bool = False
    ):
        if multi_network:
            self._init_multi_network(transport or {})
        else:
            self._init_single_network(transport or {})

    def _init_single_network(
        self, transport_config: Union[Transport, Dict[str, Any]]
    ) -> None:
        """Initialize single network client"""
        if isinstance(transport_config, Transport):
            self.transport = transport_config
        else:
            # Default to HTTP transport
            self.transport = HTTPTransport(transport_config)

        self.multi_network = False
        self._init_apis()

    def _init_multi_network(self, configs: Dict[str, Any]) -> None:
        """Initialize multi-network client"""
        self.multi_network_client = MultiNetworkClient()

        # Configure networks
        for name, config in configs.get('networks', {}).items():
            transport = self._create_transport(config)
            network_config = NetworkConfig(
                name=name,
                chain_id=config['chain_id'],
                transport=transport,
                is_default=config.get('default', False)
            )
            self.multi_network_client.add_network(network_config)

        self.multi_network = True
        self._init_apis()

    def _create_transport(self, config: Dict[str, Any]) -> Transport:
        """Create transport from config"""
        transport_type = config.get('type', 'http')

        if transport_type == 'http':
            return HTTPTransport(config)
        elif transport_type == 'websocket':
            return WebSocketTransport(config)
        elif transport_type == 'crosschain':
            return CrossChainTransport(config)
        else:
            raise ValueError(f"Unknown transport type: {transport_type}")

    def _init_apis(self) -> None:
        """Initialize API clients"""
        if self.multi_network:
            self.jobs = MultiNetworkJobsAPI(self.multi_network_client)
            self.settlement = MultiNetworkSettlementAPI(self.multi_network_client)
            # Single-network APIs below fall back to the default network's transport
            transport = self.multi_network_client.get_transport()
        else:
            self.jobs = JobsAPI(self.transport)
            self.settlement = SettlementAPI(self.transport)
            transport = self.transport

        # Other APIs remain the same but use the transport
        self.marketplace = MarketplaceAPI(transport)
        self.wallet = WalletAPI(transport)
        self.receipts = ReceiptsAPI(transport)

    async def connect(self) -> None:
        """Connect to network(s)"""
        if self.multi_network:
            await self.multi_network_client.connect_all()
        else:
            await self.transport.connect()

    async def disconnect(self) -> None:
        """Disconnect from network(s)"""
        if self.multi_network:
            await self.multi_network_client.disconnect_all()
        else:
            await self.transport.disconnect()
```

## Usage Examples

### Single Network with HTTP Transport

```python
from aitbc import AITBCClient, HTTPTransport

# Create client with HTTP transport
transport = HTTPTransport({
    'base_url': 'https://aitbc.bubuit.net/api',
    'timeout': 30,
    'default_headers': {'X-API-Key': 'your-key'}
})

client = AITBCClient(transport)
await client.connect()

# Use APIs normally
job = await client.jobs.create({...})
```

### Multi-Network Configuration

```python
from aitbc import AITBCClient, HTTPTransport

config = {
    'networks': {
        'ethereum': {
            'type': 'http',
            'chain_id': 1,
            'base_url': 'https://aitbc.bubuit.net/api',
            'default': True
        },
        'polygon': {
            'type': 'http',
            'chain_id': 137,
            'base_url': 'https://polygon-api.aitbc.io'
        },
        'arbitrum': {
            'type': 'crosschain',
            'chain_id': 42161,
            'base_transport': HTTPTransport({
                'base_url': 'https://arbitrum-api.aitbc.io'
            }),
            'bridges': {
                'layerzero': {'enabled': True},
                'chainlink': {'enabled': True}
            }
        }
    }
}

client = AITBCClient(config, multi_network=True)
await client.connect()

# Create job on specific network
job = await client.jobs.create({...}, chain_id=137)

# Settle across chains
settlement = await client.settlement.settle_cross_chain(
    job_id=job['id'],
    target_chain_id=42161,
    bridge_name='layerzero'
)
```

### Cross-Chain Settlement

```python
# Create job on Ethereum
job = await client.jobs.create({
    'name': 'cross-chain-ai-job',
    'target_chain': 42161,  # Arbitrum
    'requires_cross_chain_settlement': True
})

# Wait for completion
result = await client.jobs.wait_for_completion(job['id'])

# Settle to Arbitrum
settlement = await client.settlement.settle_cross_chain(
    job_id=job['id'],
    target_chain_id=42161,
    bridge_name='layerzero'
)

# Monitor settlement
status = await client.settlement.get_status(settlement['message_id'])
```

## Migration Guide

### From Current SDK

```python
# Old way
client = AITBCClient(api_key='key', base_url='url')

# New way (backward compatible)
client = AITBCClient({
    'base_url': 'url',
    'default_headers': {'X-API-Key': 'key'}
})

# Or with explicit transport
transport = HTTPTransport({
    'base_url': 'url',
    'default_headers': {'X-API-Key': 'key'}
})
client = AITBCClient(transport)
```

## Benefits

1. **Flexibility**: Easy to add new transport types
2. **Multi-Network**: Support for multiple blockchains
3. **Cross-Chain**: Built-in support for cross-chain settlements
4. **Backward Compatible**: Existing code continues to work
5. **Testable**: Easy to mock transports for testing
6. **Extensible**: Plugin architecture for custom transports

---

*Document Version: 1.0*
*Last Updated: 2025-01-10*
*Owner: SDK Team*

# AITBC – Artificial Intelligence Token Blockchain

## Overview (Recovered)

- AITBC couples decentralized blockchain control with asset‑backed value derived from AI computation.
- No pre‑mint: tokens are minted by providers **only after** serving compute; prices are set by providers (can be free at bootstrap).

## Staged Development Roadmap

### Stage 1: Client–Server Prototype (no blockchain, no hub)

- Direct client → server API.
- API‑key auth; local job logging.
- Goal: validate AI service loop and throughput.

### Stage 2: Blockchain Integration

- Introduce **AIToken** and minimal smart contracts for minting + accounting.
- Mint = amount of compute successfully served; no premint.

### Stage 3: AI Pool Hub

- Hub matches requests to multiple servers (sharding/parallelization), verifies outputs, and accounts contributions.
- Distributes payments/minted tokens proportionally to work.
|
||||
|
||||
### Stage 4: Marketplace
|
||||
|
||||
- Web DEX/market to buy/sell AITokens; price discovery; reputation and SLAs.
|
||||
|
||||
## System Architecture: Actors
|
||||
|
||||
- **Client** – requests AI jobs (e.g., image/video generation).
|
||||
- **Server/Provider** – runs models (Stable Diffusion, PyTorch, etc.).
|
||||
- **Blockchain Node** – ledger + minting rules.
|
||||
- **AI Pool Hub** – orchestration, metering, payouts.
|
||||
|
||||
## Token Minting Logic (Genesis‑less)
|
||||
|
||||
- No tokens at boot.
|
||||
- Provider advertises price/unit (e.g., 1 AIToken per image or per N GPU‑seconds).
|
||||
- After successful job → provider mints that amount. Free jobs mint 0.
|
||||
|
||||
---
|
||||
|
||||
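The minting rule above is just the provider's advertised price times the work delivered. A minimal sketch, assuming GPU-seconds as the compute unit (the function and parameter names are illustrative):

```python
def mint_amount(gpu_seconds: float, price_per_gpu_second: float) -> float:
    """Tokens a provider may mint after a successfully served job.

    A free job (price 0) mints nothing, matching the bootstrap rule.
    """
    if gpu_seconds < 0 or price_per_gpu_second < 0:
        raise ValueError("inputs must be non-negative")
    return gpu_seconds * price_per_gpu_second

# 1 AIToken per GPU-second, 1.9 GPU-seconds served:
assert mint_amount(1.9, 1.0) == 1.9
# Free-tier provider at bootstrap mints zero:
assert mint_amount(1.9, 0.0) == 0.0
```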
# Stage 1 – Technical Implementation Plan (Detail)

## Goals

- A working end-to-end path: prompt → inference → result.
- Authentication, rate limiting, logging.

## Architecture

```
[ Client ] ⇄ HTTP/JSON ⇄ [ FastAPI AI server (GPU) ]
```

- The server hosts the inference endpoints; the client submits jobs.
- Optional: WebSocket for streaming logs/progress.

## Technology Stack

- **Server**: Python 3.10+, FastAPI, Uvicorn, PyTorch, diffusers (Stable Diffusion), PIL.
- **Client**: Python CLI (requests / httpx) or a lightweight web UI.
- **Persistence**: SQLite or a JSON log; artifacts on disk or S3-like storage.
- **Security**: API key (env/secret file), CORS policy, rate limiting (slowapi), timeouts.

## API Specification (v0)

### POST `/v1/generate-image`

Request JSON:

```json
{
  "api_key": "<KEY>",
  "prompt": "a futuristic city skyline at night",
  "steps": 30,
  "guidance": 7.5,
  "width": 512,
  "height": 512,
  "seed": 12345
}
```

Response JSON:

```json
{
  "status": "ok",
  "job_id": "2025-09-26-000123",
  "image_base64": "data:image/png;base64,....",
  "duration_ms": 2180,
  "gpu_seconds": 1.9
}
```

### GET `/v1/health`

- Returns `{ "status": "ok", "gpu": "RTX 2060", "model": "SD1.5" }`.

## Server Flow (Pseudocode)

```python
@app.post("/v1/generate-image")
def gen(req: Request):
    assert check_api_key(req.api_key)
    rate_limit(req.key)
    t0 = now()
    img = stable_diffusion.generate(prompt=req.prompt, ...)
    log_job(user=req.key, gpu_seconds=measure_gpu(), ok=True)
    return {"status": "ok", "image_base64": b64(img), "duration_ms": ms_since(t0)}
```

## Operational Aspects

- **Logging**: structured logs (JSON) including prompt hash, runtime, GPU seconds, exit code.
- **Observability**: Prometheus/OpenTelemetry metrics (req/sec, p95 latency, VRAM usage).
- **Errors**: retry policy (idempotent), graceful shutdown, max batch/queue size.
- **Security**: input sanitization, upload limits, clean up temp directories.

## Setup Steps (Linux, NVIDIA RTX 2060)

```bash
sudo apt update && sudo apt install -y python3-venv git
python3 -m venv venv && source venv/bin/activate
pip install fastapi uvicorn[standard] torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
pip install diffusers transformers accelerate pillow safetensors xformers slowapi httpx
# Start
uvicorn server:app --host 127.0.0.2 --port 8000 --workers 1
```

## Acceptance Criteria

- `GET /v1/health` returns GPU/model info.
- `POST /v1/generate-image` returns a 512×512 PNG in < 5 s (on an RTX 2060, SD 1.5, ~30 steps).
- Logs contain at least, per job: job_id, duration_ms, gpu_seconds, bytes_out.

## Next Steps Toward Stage 2

- Define a job receipt schema (a hashable receipt for later on-chain minting).
- Fix the "compute unit" (e.g., GPU seconds, tokens per prompt).
- Add a nonce/signature to each request for later on-chain verification.

---

## Short Stage 2/3/4 Preview (Implementation Notes)

- **Stage 2 (blockchain)**: smart contract with `mint(provider, units, receipt_hash)`, off-chain oracle/attester.
- **Stage 3 (hub)**: scheduler (priority, price, reputation), sharding of large jobs, consistency checks, reward split.
- **Stage 4 (marketplace)**: order book, KYC/compliance layer (per jurisdiction), custody-free wallet integration.

## Sources (Excerpt)

- Ethereum smart contracts: [https://ethereum.org/en/smart-contracts/](https://ethereum.org/en/smart-contracts/)
- PoS overview: [https://ethereum.org/en/developers/docs/consensus-mechanisms/pos/](https://ethereum.org/en/developers/docs/consensus-mechanisms/pos/)
- PyTorch deployment: [https://pytorch.org/tutorials/](https://pytorch.org/tutorials/)
- FastAPI docs: [https://fastapi.tiangolo.com/](https://fastapi.tiangolo.com/)

---
# Stage 1 – Reference Implementation (Code)

## `.env`

```
API_KEY=CHANGE_ME_SUPERSECRET
MODEL_ID=runwayml/stable-diffusion-v1-5
BIND_HOST=127.0.0.2
BIND_PORT=8000
```

## `requirements.txt`

```
fastapi
uvicorn[standard]
httpx
pydantic
python-dotenv
slowapi
pillow
torch
torchvision
torchaudio
transformers
diffusers
accelerate
safetensors
xformers
```

## `server.py`
```python
import base64, io, os, time, hashlib
from functools import lru_cache
from typing import Optional

from dotenv import load_dotenv
from fastapi import FastAPI, HTTPException, Request
from pydantic import BaseModel, Field
from slowapi import Limiter, _rate_limit_exceeded_handler
from slowapi.errors import RateLimitExceeded
from slowapi.util import get_remote_address
from PIL import Image

load_dotenv()
API_KEY = os.getenv("API_KEY", "CHANGE_ME_SUPERSECRET")
MODEL_ID = os.getenv("MODEL_ID", "runwayml/stable-diffusion-v1-5")

app = FastAPI(title="AITBC Stage1 Server", version="0.1.0")
limiter = Limiter(key_func=get_remote_address)
# slowapi requires the limiter to be registered on the app for @limiter.limit to work
app.state.limiter = limiter
app.add_exception_handler(RateLimitExceeded, _rate_limit_exceeded_handler)

class GenRequest(BaseModel):
    api_key: str
    prompt: str
    steps: int = Field(30, ge=5, le=100)
    guidance: float = Field(7.5, ge=0, le=25)
    width: int = Field(512, ge=256, le=1024)
    height: int = Field(512, ge=256, le=1024)
    seed: Optional[int] = None

@lru_cache(maxsize=1)
def load_pipeline():
    from diffusers import StableDiffusionPipeline
    import torch
    pipe = StableDiffusionPipeline.from_pretrained(MODEL_ID, torch_dtype=torch.float16, safety_checker=None)
    pipe = pipe.to("cuda" if torch.cuda.is_available() else "cpu")
    pipe.enable_attention_slicing()
    return pipe

@app.get("/v1/health")
def health():
    gpu = os.getenv("NVIDIA_VISIBLE_DEVICES", "auto")
    return {"status": "ok", "gpu": gpu, "model": MODEL_ID}

@app.post("/v1/generate-image")
@limiter.limit("10/minute")
def generate(req: GenRequest, request: Request):
    if req.api_key != API_KEY:
        raise HTTPException(status_code=401, detail="invalid api_key")
    t0 = time.time()
    pipe = load_pipeline()
    generator = None
    if req.seed is not None:
        import torch
        generator = torch.Generator(device=pipe.device).manual_seed(int(req.seed))
    result = pipe(req.prompt, num_inference_steps=req.steps, guidance_scale=req.guidance, width=req.width, height=req.height, generator=generator)
    img: Image.Image = result.images[0]
    buf = io.BytesIO()
    img.save(buf, format="PNG")
    b64 = base64.b64encode(buf.getvalue()).decode("ascii")
    dur_ms = int((time.time() - t0) * 1000)
    job_id = hashlib.sha256(f"{t0}-{req.prompt[:64]}".encode()).hexdigest()[:16]
    log_line = {"job_id": job_id, "duration_ms": dur_ms, "bytes_out": len(b64), "prompt_hash": hashlib.sha256(req.prompt.encode()).hexdigest()}
    print(log_line, flush=True)
    return {"status": "ok", "job_id": job_id, "image_base64": f"data:image/png;base64,{b64}", "duration_ms": dur_ms}

if __name__ == "__main__":
    import uvicorn
    uvicorn.run("server:app", host=os.getenv("BIND_HOST", "127.0.0.2"), port=int(os.getenv("BIND_PORT", "8000")), reload=False)
```
## `client.py`

```python
import base64, os
import httpx

API = os.getenv("API", "http://localhost:8000")
API_KEY = os.getenv("API_KEY", "CHANGE_ME_SUPERSECRET")

payload = {
    "api_key": API_KEY,
    "prompt": "a futuristic city skyline at night, ultra detailed, neon",
    "steps": 30,
    "guidance": 7.5,
    "width": 512,
    "height": 512,
}

r = httpx.post(f"{API}/v1/generate-image", json=payload, timeout=120)
r.raise_for_status()
resp = r.json()
print("job:", resp.get("job_id"), "duration_ms:", resp.get("duration_ms"))
img_b64 = resp["image_base64"].split(",", 1)[1]
with open("out.png", "wb") as f:
    f.write(base64.b64decode(img_b64))
print("saved out.png")
```

---
# OpenAPI 3.1 Specification (Stage 1)

```yaml
openapi: 3.1.0
info:
  title: AITBC Stage1 Server
  version: 0.1.0
servers:
  - url: http://localhost:8000
paths:
  /v1/health:
    get:
      summary: Health check
      responses:
        '200':
          description: OK
          content:
            application/json:
              schema:
                type: object
                properties:
                  status: { type: string }
                  gpu: { type: string }
                  model: { type: string }
  /v1/generate-image:
    post:
      summary: Generate image from text prompt
      requestBody:
        required: true
        content:
          application/json:
            schema:
              type: object
              required: [api_key, prompt]
              properties:
                api_key: { type: string }
                prompt: { type: string }
                steps: { type: integer, minimum: 5, maximum: 100, default: 30 }
                guidance: { type: number, minimum: 0, maximum: 25, default: 7.5 }
                width: { type: integer, minimum: 256, maximum: 1024, default: 512 }
                height: { type: integer, minimum: 256, maximum: 1024, default: 512 }
                seed: { type: [integer, 'null'] }
      responses:
        '200':
          description: Image generated
          content:
            application/json:
              schema:
                type: object
                properties:
                  status: { type: string }
                  job_id: { type: string }
                  image_base64: { type: string }
                  duration_ms: { type: integer }
```

---
# Stage 2 – Receipt Schema & Hashing

## JSON Receipt (off-chain, signable)

```json
{
  "job_id": "2025-09-26-000123",
  "provider": "0xProviderAddress",
  "client": "client_public_key_or_id",
  "units": 1.90,
  "unit_type": "gpu_seconds",
  "model": "runwayml/stable-diffusion-v1-5",
  "prompt_hash": "sha256:...",
  "started_at": 1695720000,
  "finished_at": 1695720002,
  "artifact_sha256": "...",
  "nonce": "b7f3...",
  "hub_id": "optional-hub",
  "chain_id": 11155111
}
```

## Hashing

- Canonical serialization (minified JSON, fields in alphabetical order).
- `receipt_hash = keccak256(bytes(serialized))` (for EVM compatibility) **or** `sha256` if chain-agnostic.

## Signature

- Signature over `receipt_hash`:
  - **secp256k1/ECDSA** (Ethereum-compatible, EIP-191/EIP-712) **or** Ed25519 (if an off-chain attester is preferred).
- Fields verified on-chain: `provider`, `units`, `receipt_hash`, `signature`.

## Double-Mint Prevention

- The smart contract stores `used[receipt_hash] = true` after a successful mint.

---
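The canonical-serialization rule above maps directly onto `json.dumps` with sorted keys and no whitespace. A sketch of the chain-agnostic `sha256` variant (the `keccak256` variant would need an extra dependency such as pycryptodome, since `hashlib.sha3_256` is NIST SHA-3, not Keccak-256):

```python
import hashlib
import json

def receipt_hash(receipt: dict) -> str:
    """sha256 over the canonical serialization: minified JSON, keys sorted."""
    canonical = json.dumps(receipt, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

r = {"job_id": "2025-09-26-000123", "units": 1.9, "unit_type": "gpu_seconds"}
# Key order in the input dict must not affect the hash:
assert receipt_hash(r) == receipt_hash(
    {"unit_type": "gpu_seconds", "units": 1.9, "job_id": "2025-09-26-000123"}
)
```

Because the serialization is canonical, any party holding the same receipt fields reproduces the same hash, which is what makes it signable and checkable on-chain.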
# Stage 2 – Smart Contract Skeleton (Solidity)

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.24;

interface IERC20Mint {
    function mint(address to, uint256 amount) external;
}

contract AITokenMinter {
    IERC20Mint public token;
    address public attester; // off-chain hub/oracle allowed to attest receipts
    mapping(bytes32 => bool) public usedReceipt; // receipt_hash → consumed

    event Minted(address indexed provider, uint256 units, bytes32 receiptHash);
    event AttesterChanged(address indexed oldA, address indexed newA);

    constructor(address _token, address _attester) {
        token = IERC20Mint(_token);
        attester = _attester;
    }

    function setAttester(address _attester) external /* add access control */ {
        emit AttesterChanged(attester, _attester);
        attester = _attester;
    }

    function mintWithReceipt(
        address provider,
        uint256 units,
        bytes32 receiptHash,
        bytes calldata attesterSig
    ) external {
        require(!usedReceipt[receiptHash], "receipt used");
        // Verify the attester signature over an EIP-191 style message:
        // keccak256(abi.encode(provider, units, receiptHash))
        bytes32 msgHash = keccak256(abi.encode(provider, units, receiptHash));
        require(_recover(msgHash, attesterSig) == attester, "bad sig");
        usedReceipt[receiptHash] = true;
        token.mint(provider, units);
        emit Minted(provider, units, receiptHash);
    }

    function _recover(bytes32 msgHash, bytes memory sig) internal pure returns (address) {
        bytes32 ethHash = keccak256(abi.encodePacked("\x19Ethereum Signed Message:\n32", msgHash));
        (bytes32 r, bytes32 s, uint8 v) = _split(sig);
        return ecrecover(ethHash, v, r, s);
    }

    function _split(bytes memory sig) internal pure returns (bytes32 r, bytes32 s, uint8 v) {
        require(sig.length == 65, "sig len");
        assembly {
            r := mload(add(sig, 32))
            s := mload(add(sig, 64))
            v := byte(0, mload(add(sig, 96)))
        }
    }
}
```

> Note: in production, add access control (Ownable/role-based), Pausable, a reentrancy guard, and EIP-712 typed data.

---
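The contract's `usedReceipt` mapping can be mirrored off-chain so the attester refuses to sign the same receipt twice before the transaction is even submitted. A minimal in-memory sketch (the `ReceiptRegistry` name and `claim` method are illustrative, not part of the codebase):

```python
class ReceiptRegistry:
    """Off-chain mirror of the contract's usedReceipt mapping."""

    def __init__(self):
        self._used = set()

    def claim(self, receipt_hash: str) -> bool:
        """Mark a receipt as consumed; return False if it was already claimed."""
        if receipt_hash in self._used:
            return False
        self._used.add(receipt_hash)
        return True

reg = ReceiptRegistry()
assert reg.claim("0xabc") is True   # first claim may proceed to mint
assert reg.claim("0xabc") is False  # replay is rejected before hitting the chain
```

A persistent store (SQLite, PostgreSQL) would replace the set in practice, but the invariant is the same: one mint per receipt hash.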
# Stage 3 – Hub Specification (Short)

- **Scheduler**: round-robin + price/VRAM filter; optional reputation.
- **Split**: shard large jobs; aggregate `units` from subjobs.
- **Verification**: spot-check re-evaluation / consistency hashes.
- **Payout**: proportional distribution; one receipt per overall job.

# Stage 4 – Marketplace (Short)

- **Order book** (limit/market), **wallet connect**, non-custodial.
- **KYC/compliance** optional per jurisdiction.
- **Reputation/SLAs** linkable on-/off-chain.

---
# Deployment Without Docker (Bare Metal / VM)

## Prerequisites

- Ubuntu/Debian with an NVIDIA driver (535+) and CUDA/cuDNN matching the PyTorch version.
- Python 3.10+ and `python3-venv`.
- Public ports: **8000/tcp** (API) – optionally a reverse proxy on 80/443.

## Driver & CUDA (Short)

```bash
# NVIDIA driver (Ubuntu example)
sudo apt update && sudo apt install -y nvidia-driver-535
# After reboot: check nvidia-smi
# PyTorch ships its own CUDA toolkit via wheels (recommended); a system CUDA install is not strictly required.
```

## User & Directory Layout

```bash
sudo useradd -m -r -s /bin/bash aitbc
sudo -u aitbc mkdir -p /opt/aitbc/app /opt/aitbc/logs
# Copy the code to /opt/aitbc/app
```

## Virtualenv & Dependencies

```bash
sudo -u aitbc bash -lc '
cd /opt/aitbc/app && python3 -m venv venv && source venv/bin/activate && \
pip install --upgrade pip && pip install -r requirements.txt
'
```

## Configuration (.env)

```
API_KEY=<SECRET>
MODEL_ID=runwayml/stable-diffusion-v1-5
BIND_HOST=127.0.0.1   # behind a reverse proxy
BIND_PORT=8000
```
## Systemd Unit (Uvicorn)

`/etc/systemd/system/aitbc.service`

```ini
[Unit]
Description=AITBC Stage1 FastAPI Server
After=network-online.target
Wants=network-online.target

[Service]
User=aitbc
Group=aitbc
WorkingDirectory=/opt/aitbc/app
EnvironmentFile=/opt/aitbc/app/.env
ExecStart=/opt/aitbc/app/venv/bin/python -m uvicorn server:app --host ${BIND_HOST} --port ${BIND_PORT} --workers 1
Restart=always
RestartSec=3
# Optional GPU/VRAM limits via NVIDIA_VISIBLE_DEVICES
StandardOutput=append:/opt/aitbc/logs/stdout.log
StandardError=append:/opt/aitbc/logs/stderr.log

[Install]
WantedBy=multi-user.target
```

Enable & start:

```bash
sudo systemctl daemon-reload
sudo systemctl enable --now aitbc.service
sudo systemctl status aitbc.service
```
## Reverse Proxy (Optional, Without Docker)

### Nginx (TLS via Certbot)

```bash
sudo apt install -y nginx certbot python3-certbot-nginx
sudo tee /etc/nginx/sites-available/aitbc <<'NG'
server {
    listen 80; server_name example.com;
    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
NG
sudo ln -s /etc/nginx/sites-available/aitbc /etc/nginx/sites-enabled/aitbc
sudo nginx -t && sudo systemctl reload nginx
sudo certbot --nginx -d example.com
```
## Firewall / Network

```bash
sudo ufw allow OpenSSH
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw enable
```

## Monitoring Without Docker

- **systemd**: `journalctl -u aitbc -f`
- **Metrics**: Prometheus Node Exporter; `nvtop`/`nvidia-smi dmon` for the GPU.
- **Alerts**: systemd `Restart=always`; optionally Monit.

## Low-Downtime Update (Without Containers)

```bash
sudo systemctl stop aitbc
sudo -u aitbc bash -lc 'cd /opt/aitbc/app && git pull && source venv/bin/activate && pip install -r requirements.txt'
sudo systemctl start aitbc
```

## Hardening & Best Practices

- Strong API key; IP-based allowlist at the reverse proxy.
- Enable rate limiting (slowapi); set request body limits (`client_max_body_size`).
- Clean temporary files regularly (systemd tmpfiles).
- Keep the GPU workstation separate from the edge exposure (API behind the proxy).

> Note: this guide deliberately avoids Docker and uses **systemd + venv** for a reproducible, lean setup.
@@ -1,392 +0,0 @@
# blockchain-node/ — Minimal Chain (asset-backed by compute)

## 0) TL;DR boot path for Windsurf
1. Create the service: `apps/blockchain-node` (Python, FastAPI, asyncio, uvicorn).
2. Data layer: `sqlite` via `SQLModel` (later: PostgreSQL).
3. P2P: WebSocket gossip (lib: `websockets`) with a simple overlay (peer table + heartbeats).
4. Consensus (MVP): **PoA single-author** (devnet) → upgrade to **Compute-Backed Proof (CBP)** after coordinator & miner telemetry are wired.
5. Block content: **ComputeReceipts** = "proofs of delivered AI work" signed by miners, plus standard transfers.
6. Minting: AIToken minted per verified compute unit (e.g., `1 AIT = 1,000 token-ops` — calibrate later).
7. REST RPC: `/rpc/*` for clients & coordinator; `/p2p/*` for peers; `/admin/*` for node ops.
8. Ship a `devnet` script that starts: 1 bootstrap node, 1 coordinator-api mock, 1 miner mock, 1 client demo.

---

## 1) Goal & Scope
- Provide a **minimal, testable blockchain node** that issues AITokens **only** when real compute was delivered (asset-backed).
- Easy to run, easy to reset, deterministic devnet.
- Strong boundaries so **coordinator-api** (job orchestration) and **miner-node** (workers) can integrate quickly.

Out of scope (MVP):
- Smart contracts VM.
- Sharding/advanced networking.
- Custodial wallets. (Use local keypairs for dev.)

---
## 2) Core Concepts

### 2.1 Actors
- **Client**: pays AITokens to request compute jobs.
- **Coordinator**: matches jobs ↔ miners; returns signed receipts.
- **Miner**: executes jobs; produces a **ComputeReceipt** signed with the miner key.
- **Blockchain Node**: validates receipts, mints AIT for miners, tracks balances, finalizes blocks.

### 2.2 Asset-Backed Minting
- Unit of account: **AIToken (AIT)**.
- A miner earns AIT when a **ComputeReceipt** is included in a block.
- A receipt is valid iff:
  1) its `job_id` exists in the coordinator logs,
  2) `client_payment_tx` covers the quoted price,
  3) `miner_sig` over `(job_id, hash(output_meta), compute_units, price, nonce)` is valid,
  4) it was not previously claimed (`receipt_id` unique).

---

## 3) Minimal Architecture

```
blockchain-node/
├─ src/
│  ├─ main.py        # FastAPI entry
│  ├─ p2p.py         # WS gossip, peer table, block relay
│  ├─ consensus.py   # PoA/CBP state machine
│  ├─ types.py       # dataclasses / pydantic models
│  ├─ state.py       # DB access (SQLModel), UTXO/Account
│  ├─ mempool.py     # tx pool (transfers + receipts)
│  ├─ crypto.py      # ed25519 keys, signatures, hashing
│  ├─ receipts.py    # receipt validation (with coordinator)
│  ├─ blocks.py      # block build/verify, difficulty stub
│  ├─ rpc.py         # REST/RPC routes for clients & ops
│  └─ settings.py    # env config
├─ tests/
│  └─ ...            # unit & integration tests
├─ scripts/
│  ├─ devnet_up.sh   # run bootstrap node + mocks
│  └─ keygen.py      # create node/miner/client keys
├─ README.md
└─ requirements.txt
```

---
## 4) Data Model (SQLModel)

### 4.1 Tables
- `blocks(id, parent_id, height, timestamp, proposer, tx_count, hash, state_root, sig)`
- `tx(id, block_id, type, payload_json, sender, nonce, fee, sig, hash, status)`
- `accounts(address, balance, nonce, pubkey)`
- `receipts(receipt_id, job_id, client_addr, miner_addr, compute_units, price, output_hash, miner_sig, status)`
- `peers(node_id, addr, last_seen, score)`
- `params(key, value)` — chain config (mint ratios, fee rate, etc.)

### 4.2 TX Types
- `TRANSFER`: move AIT from A → B
- `RECEIPT_CLAIM`: include a **ComputeReceipt**; mints to the miner and settles the client escrow
- `STAKE/UNSTAKE` (later)
- `PARAM_UPDATE` (PoA only, gated by an admin key for devnet)

---

## 5) Block Format (JSON)
```json
{
  "parent": "<block_hash>",
  "height": 123,
  "timestamp": 1699999999,
  "proposer": "<node_address>",
  "txs": ["<tx_hash>", "..."],
  "stateRoot": "<merkle_root_after_block>",
  "sig": "<proposer_signature_over_header>"
}
```

Header sign bytes = `hash(parent|height|timestamp|proposer|stateRoot)`

---
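The header sign-bytes rule above can be read literally as a digest over the pipe-joined header fields. A sketch assuming sha256 as the hash function (the spec leaves the concrete hash open) and the JSON field names from the block format:

```python
import hashlib

def header_sign_bytes(block: dict) -> bytes:
    """hash(parent|height|timestamp|proposer|stateRoot) per the block format."""
    preimage = "|".join(
        str(block[k]) for k in ("parent", "height", "timestamp", "proposer", "stateRoot")
    )
    return hashlib.sha256(preimage.encode("utf-8")).digest()

header = {"parent": "00ab", "height": 123, "timestamp": 1699999999,
          "proposer": "ait1node...", "stateRoot": "ffcd", "txs": []}
digest = header_sign_bytes(header)  # the proposer's ed25519 key signs this
assert len(digest) == 32
# Any change to a signed field changes the digest:
assert digest != header_sign_bytes({**header, "height": 124})
```

Note that `txs` is deliberately excluded: the transactions are committed indirectly through `stateRoot`, matching the header definition above.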
## 6) Consensus

### 6.1 MVP: PoA (Single Author)
- One configured `PROPOSER_KEY` creates blocks at a fixed interval (e.g., 2 s).
- Honest mode only for devnet; finality by the canonical longest/height rule.

### 6.2 Upgrade: **Compute-Backed Proof (CBP)**
- Each block's **work score** = total `compute_units` in the included receipts.
- Proposer election = weighted round-robin by recent work score and stake (later).
- Slashing: submitting invalid receipts reduces the score; repeated offenses → temp ban.

---

## 7) Receipt Validation (Coordinator Check)

`receipts.py` performs:
1) **Coordinator attestation** (HTTP call to coordinator-api):
   - `/attest/receipt` with `job_id`, `client`, `miner`, `price`, `compute_units`, `output_hash`.
   - Returns `{exists: bool, paid: bool, not_double_spent: bool, quote: {...}}`.
2) **Signature check**: verify `miner_sig` with the miner's `pubkey`.
3) **Economic checks**: ensure `client_payment_tx` exists & covers `price + fee`.

> For a devnet without a live coordinator, ship a **mock** that returns a deterministic attestation for known `job_id` ranges.

---

## 8) Fees & Minting

- **Fee model (MVP)**: `fee = base_fee + k * payload_size`.
- **Minting**:
  - Miner gets: `mint = compute_units * MINT_PER_UNIT`.
  - Coordinator gets: `coord_cut = mint * COORDINATOR_RATIO`.
  - Chain treasury (optional): a small %, configurable in `params`.

---
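With the devnet defaults from the configuration section (`MINT_PER_UNIT=1000`, `COORDINATOR_RATIO=0.05`), the formulas above work out as follows. Two assumptions are made here: `base_fee` and `k` values are illustrative, and the coordinator cut is deducted from the miner's mint rather than minted on top (the spec leaves that open):

```python
MINT_PER_UNIT = 1000       # devnet default from the ENV section
COORDINATOR_RATIO = 0.05   # devnet default from the ENV section

def tx_fee(payload_size: int, base_fee: int = 10, k: int = 1) -> int:
    # fee = base_fee + k * payload_size (base_fee and k are illustrative values)
    return base_fee + k * payload_size

def mint_split(compute_units: int) -> tuple[int, int]:
    """Return (miner_mint, coordinator_cut) for a claimed receipt."""
    mint = compute_units * MINT_PER_UNIT
    coord_cut = int(mint * COORDINATOR_RATIO)
    return mint - coord_cut, coord_cut

# For a receipt claiming 2500 compute units:
miner, coord = mint_split(2500)
assert (miner, coord) == (2_375_000, 125_000)
```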
## 9) RPC Surface (FastAPI)

### 9.1 Public
- `POST /rpc/sendTx` → `{txHash}`
- `GET /rpc/getTx/{txHash}` → `{status, receipt}`
- `GET /rpc/getBlock/{heightOrHash}`
- `GET /rpc/getHead` → `{height, hash}`
- `GET /rpc/getBalance/{address}` → `{balance, nonce}`
- `POST /rpc/estimateFee` → `{fee}`

### 9.2 Coordinator-facing
- `POST /rpc/submitReceipt` (alias of `sendTx` with type `RECEIPT_CLAIM`)
- `POST /rpc/attest` (devnet mock only)

### 9.3 Admin (devnet)
- `POST /admin/paramSet` (PoA only)
- `POST /admin/peers/add` `{addr}`
- `POST /admin/mintFaucet` `{address, amount}` (devnet)

### 9.4 P2P (WS)
- `GET /p2p/peers` → list
- `WS /p2p/ws` → subscribe to gossip: `{"type":"block"|"tx"|"peer","data":...}`

---

## 10) Keys & Crypto
- **ed25519** for account & node keys.
- Address = `bech32(hrp="ait", sha256(pubkey)[0:20])`.
- Sign bytes:
  - TX: `hash(type|sender|nonce|fee|payload_json_canonical)`
  - Block: header hash as above.

Ship `scripts/keygen.py` for dev use.

---
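The address rule above can be sketched with the standard library; the final bech32 encoding is left as a hex stand-in here, since bech32 needs a third-party package (e.g. the `bech32` distribution on PyPI), and the `dev_address` helper is purely illustrative:

```python
import hashlib

HRP = "ait"  # human-readable part from the spec

def address_payload(pubkey: bytes) -> bytes:
    """sha256(pubkey)[0:20]: the 20-byte payload that bech32 would encode."""
    return hashlib.sha256(pubkey).digest()[:20]

def dev_address(pubkey: bytes) -> str:
    # Hex stand-in for bech32(hrp="ait", payload); swap in a real bech32
    # encoder for wire-compatible addresses.
    return HRP + "1" + address_payload(pubkey).hex()

pk = bytes(32)  # placeholder for an ed25519 public key
assert len(address_payload(pk)) == 20
assert dev_address(pk).startswith("ait1")
```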
## 11) Mempool Rules
- Accept if:
  - `sig` valid,
  - `nonce == account.nonce + 1`,
  - `fee >= minFee`,
  - for `RECEIPT_CLAIM`: passes `receipts.validate()` *optimistically* (soft-accept), then **revalidate** at block time.

Replacement: a higher-fee tx replaces the same `(sender, nonce)`.

---

## 12) Node Lifecycle

**Start:**
1) Load config, open the DB, ensure genesis.
2) Connect to bootstrap peers (if any).
3) Start RPC (FastAPI) + the P2P WS server.
4) Start the block proposer (if a PoA key is present).
5) Start peer heartbeats + gossip loops.

**Shutdown:**
- Graceful: flush a mempool snapshot, close the DB.

---

## 13) Genesis
- `genesis.json`:
  - `chain_id`, `timestamp`, `accounts` (faucet), `params` (mint ratios, base fee), `authorities` (PoA keys).

Provide `scripts/make_genesis.py`.

---
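The admission and replacement rules from section 11 can be condensed into a small class. A sketch with the signature and receipt checks stubbed out as parameters (the class shape is an assumption, not the repo's `mempool.py`):

```python
class Mempool:
    """Admission + fee-replacement rules from section 11 (signature and
    receipt validation are stubbed out as parameters here)."""

    def __init__(self, min_fee: int = 1):
        self.min_fee = min_fee
        self.pool = {}  # (sender, nonce) -> tx dict

    def accept(self, tx: dict, account_nonce: int, sig_valid: bool = True) -> bool:
        if not sig_valid:
            return False
        if tx["nonce"] != account_nonce + 1:
            return False
        if tx["fee"] < self.min_fee:
            return False
        key = (tx["sender"], tx["nonce"])
        existing = self.pool.get(key)
        if existing is not None and tx["fee"] <= existing["fee"]:
            return False  # only a strictly higher fee replaces
        self.pool[key] = tx
        return True

mp = Mempool(min_fee=10)
tx = {"sender": "ait1...", "nonce": 1, "fee": 10, "type": "TRANSFER"}
assert mp.accept(tx, account_nonce=0)
assert not mp.accept(dict(tx, fee=10), account_nonce=0)  # same fee: no replace
assert mp.accept(dict(tx, fee=20), account_nonce=0)      # higher fee replaces
```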
## 14) Devnet: End-to-End Demo

### 14.1 Components
- **blockchain-node** (this repo)
- **coordinator-api (mock)**: `/attest/receipt` returns valid for `job_id` in `[1..1_000_000]`
- **miner-node (mock)**: posts `RECEIPT_CLAIM` for synthetic jobs
- **client-web (demo)**: sends `TRANSFER` & displays balances

### 14.2 Flow
1) The client pays `price` to the escrow address (coordinator).
2) The miner executes the job; the coordinator verifies the output.
3) The miner submits a **ComputeReceipt** → included in the next block.
4) AIT is minted to the miner; the escrow settles; the client is charged.

---

## 15) Testing Strategy

### 15.1 Unit
- `crypto`: keygen, sign/verify, address derivation
- `state`: balances, nonce, persistence
- `receipts`: signature + coordinator mock
- `blocks`: header hash, stateRoot

### 15.2 Integration
- Single node PoA: produce N blocks; submit transfers/receipts; assert balances.
- Two nodes P2P: block/tx relay; head convergence.

### 15.3 Property tests
- Nonce monotonicity; no double-spend; unique receipts.

---

## 16) Observability
- Structured logs (JSON) with `component`, `event`, `height`, `latency_ms`.
- `/rpc/metrics` (Prometheus format) — block time, mempool size, peers.

---
## 17) Configuration (ENV)
- `CHAIN_ID=ait-devnet`
- `DB_PATH=./data/chain.db`
- `P2P_BIND=127.0.0.2:7070`
- `RPC_BIND=127.0.0.2:8080`
- `BOOTSTRAP_PEERS=ws://host:7070,...`
- `PROPOSER_KEY=...` (optional for non-authors)
- `MINT_PER_UNIT=1000`
- `COORDINATOR_RATIO=0.05`

Provide `.env.example`.

---

## 18) Minimal API Payloads

### 18.1 TRANSFER
```json
{
  "type": "TRANSFER",
  "sender": "ait1...",
  "nonce": 1,
  "fee": 10,
  "payload": {"to": "ait1...", "amount": 12345},
  "sig": "<ed25519>"
}
```

### 18.2 RECEIPT_CLAIM
```json
{
  "type": "RECEIPT_CLAIM",
  "sender": "ait1miner...",
  "nonce": 7,
  "fee": 50,
  "payload": {
    "receipt_id": "rcpt_7f3a...",
    "job_id": "job_42",
    "client_addr": "ait1client...",
    "miner_addr": "ait1miner...",
    "compute_units": 2500,
    "price": 50000,
    "output_hash": "sha256:abcd...",
    "miner_sig": "<sig_over_core_fields>"
  },
  "sig": "<miner_account_sig>"
}
```

---
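Section 10 defines the TX sign bytes as `hash(type|sender|nonce|fee|payload_json_canonical)`; applied to the TRANSFER payload above, that looks like the following sketch (sha256 and minified sorted-key JSON are assumptions, since the spec does not pin the hash or the exact canonicalization):

```python
import hashlib
import json

def tx_sign_bytes(tx: dict) -> bytes:
    """hash(type|sender|nonce|fee|payload_json_canonical) per section 10."""
    payload = json.dumps(tx["payload"], sort_keys=True, separators=(",", ":"))
    preimage = "|".join(
        [tx["type"], tx["sender"], str(tx["nonce"]), str(tx["fee"]), payload]
    )
    return hashlib.sha256(preimage.encode("utf-8")).digest()

transfer = {"type": "TRANSFER", "sender": "ait1...", "nonce": 1, "fee": 10,
            "payload": {"to": "ait1...", "amount": 12345}}
digest = tx_sign_bytes(transfer)  # this digest is what the ed25519 account key signs
assert len(digest) == 32
```

The `sig` field in the payload examples is then the ed25519 signature over this digest; `nonce` and `fee` being part of the preimage is what makes replay and fee tampering detectable.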
## 19) Security Notes (MVP)
- Devnet PoA means trust in the proposer; do **not** expose it to the internet without a firewall.
- Enforce a coordinator host allowlist for attest calls.
- Rate-limit `/rpc/sendTx`.

---

## 20) Roadmap
1) ✅ PoA devnet with receipts.
2) 🔜 CBP proposer selection from a rolling work score.
3) 🔜 Stake & slashing.
4) 🔜 Replace SQLite with PostgreSQL.
5) 🔜 Snapshots & fast-sync.
6) 🔜 Light client (SPV of receipts & balances).

---

## 21) Developer Tasks (Windsurf Order)

1) **Scaffold** the project & `requirements.txt`:
   - `fastapi`, `uvicorn[standard]`, `sqlmodel`, `pydantic`, `websockets`, `pyyaml`, `python-dotenv`, `ed25519`, `orjson`.

2) **Implement**:
   - `crypto.py`, `types.py`, `state.py`.
   - `rpc.py` (public routes).
   - `mempool.py`.
   - `blocks.py` (build/validate).
   - `consensus.py` (PoA tick).
   - `p2p.py` (WS server + simple gossip).
   - `receipts.py` (mock coordinator).

3) **Wire** `main.py`:
   - Start the RPC, P2P, and PoA loops.

4) **Scripts**:
   - `scripts/keygen.py`, `scripts/make_genesis.py`, `scripts/devnet_up.sh`.

5) **Tests**:
   - Add unit tests + an integration test that mints on a receipt.

6) **Docs**:
   - Update `README.md` with curl examples.

---
## 22) Curl Snippets (Dev)
|
||||
|
||||
- Faucet (dev only):
|
||||
```bash
|
||||
curl -sX POST localhost:8080/admin/mintFaucet -H 'content-type: application/json' \
|
||||
-d '{"address":"ait1client...","amount":1000000}'
|
||||
```
|
||||
|
||||
- Transfer:
|
||||
```bash
|
||||
curl -sX POST localhost:8080/rpc/sendTx -H 'content-type: application/json' \
|
||||
-d @transfer.json
|
||||
```
|
||||
|
||||
- Submit Receipt:
|
||||
```bash
|
||||
curl -sX POST localhost:8080/rpc/submitReceipt -H 'content-type: application/json' \
|
||||
-d @receipt_claim.json
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 23) Definition of Done (MVP)
|
||||
- Node produces blocks on PoA.
|
||||
- Can transfer AIT between accounts.
|
||||
- Can submit a valid **ComputeReceipt** → miner balance increases; escrow decreases.
|
||||
- Two nodes converge on same head via P2P.
|
||||
- Basic metrics exposed.
|
||||
|
||||
---
|
||||
|
||||
## 24) Next Files to Create
|
||||
- `src/main.py`
|
||||
- `src/crypto.py`
|
||||
- `src/types.py`
|
||||
- `src/state.py`
|
||||
- `src/mempool.py`
|
||||
- `src/blocks.py`
|
||||
- `src/consensus.py`
|
||||
- `src/p2p.py`
|
||||
- `src/receipts.py`
|
||||
- `src/rpc.py`
|
||||
- `scripts/keygen.py`, `scripts/devnet_up.sh`
|
||||
- `.env.example`, `README.md`, `requirements.txt`
|
||||
|
||||
@@ -1,438 +0,0 @@
# coordinator-api.md

Central API that orchestrates **jobs** from clients to **miners**, tracks lifecycle, validates results, and (later) settles AITokens.
**Stage 1 (MVP):** no blockchain, no pool hub — just client ⇄ coordinator ⇄ miner.

## 1) Goals & Non-Goals

**Goals (MVP)**
- Accept computation jobs from clients.
- Match jobs to eligible miners.
- Track the job state machine (QUEUED → RUNNING → COMPLETED/FAILED/CANCELED/EXPIRED).
- Stream results back to clients; store minimal metadata.
- Provide a clean, typed API (OpenAPI/Swagger).
- Simple auth (API keys) + idempotency + rate limits.
- Minimal persistence (SQLite/Postgres) with straightforward SQL (no migrations tooling).

**Non-Goals (MVP)**
- Token minting/settlement (stub hooks only).
- Miner marketplace, staking, slashing, reputation (placeholders).
- Pool hub coordination (future stage).

---

## 2) Tech Stack

- **Python 3.12**, **FastAPI**, **Uvicorn**
- **Pydantic** for schemas
- **SQL** via `sqlite3` or Postgres (user can switch later)
- **Redis (optional)** for queueing; the MVP can start with an in-DB FIFO
- **HTTP + WebSocket** (for miner heartbeats / job streaming)

> Debian 12 target. Run under **systemd** later.

---

## 3) Directory Layout (Windsurf Workspace)

```
coordinator-api/
├─ app/
│  ├─ main.py           # FastAPI init, lifespan, routers
│  ├─ config.py         # env parsing
│  ├─ deps.py           # auth, rate-limit deps
│  ├─ db.py             # simple DB layer (sqlite/postgres)
│  ├─ matching.py       # job→miner selection
│  ├─ queue.py          # enqueue/dequeue logic
│  ├─ settlement.py     # stubs for token accounting
│  ├─ models.py         # Pydantic request/response schemas
│  ├─ states.py         # state machine + transitions
│  ├─ routers/
│  │  ├─ client.py      # /v1/jobs (submit/status/result/cancel)
│  │  ├─ miner.py       # /v1/miners (register/heartbeat/poll/submit/fail)
│  │  └─ admin.py       # /v1/admin (stats)
│  └─ ws/
│     ├─ miner.py       # WS for miner heartbeats / job stream (optional)
│     └─ client.py      # WS for client result stream (optional)
├─ tests/
│  ├─ test_client_flow.http   # REST client flow (HTTP file)
│  └─ test_miner_flow.http    # REST miner flow
├─ .env.example
├─ pyproject.toml
└─ README.md
```

---

## 4) Environment (.env)

```
APP_ENV=dev
APP_HOST=127.0.0.1
APP_PORT=8011
DATABASE_URL=sqlite:///./coordinator.db
# or: DATABASE_URL=postgresql://user:pass@localhost:5432/aitbc

# Auth
CLIENT_API_KEYS=${CLIENT_API_KEY},client_dev_key_2
MINER_API_KEYS=${MINER_API_KEY},miner_dev_key_2
ADMIN_API_KEYS=${ADMIN_API_KEY}

# Security
HMAC_SECRET=change_me
ALLOW_ORIGINS=*

# Queue
JOB_TTL_SECONDS=900
HEARTBEAT_INTERVAL_SECONDS=10
HEARTBEAT_TIMEOUT_SECONDS=30
```

---

## 5) Core Data Model (conceptual)

**Job**
- `job_id` (uuid)
- `client_id` (from API key)
- `requested_at`, `expires_at`
- `payload` (opaque JSON / bytes ref)
- `constraints` (gpu, cuda, mem, model, max_price, region)
- `state` (QUEUED|RUNNING|COMPLETED|FAILED|CANCELED|EXPIRED)
- `assigned_miner_id` (nullable)
- `result_ref` (blob path / inline json)
- `error` (nullable)
- `cost_estimate` (optional)

**Miner**
- `miner_id` (from API key)
- `capabilities` (gpu, cuda, vram, models[], region)
- `heartbeat_at`
- `status` (ONLINE|OFFLINE|DRAINING)
- `concurrency` (int), `inflight` (int)

**WorkerSession**
- `session_id`, `miner_id`, `job_id`, `started_at`, `ended_at`, `exit_reason`

---

## 6) State Machine

```
QUEUED
  -> RUNNING   (assigned to miner)
  -> CANCELED  (client)
  -> EXPIRED   (ttl)

RUNNING
  -> COMPLETED (miner submit_result)
  -> FAILED    (miner fail / timeout)
  -> CANCELED  (client)
```
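A sketch of how `states.py` might encode these edges (the `transition` helper is illustrative; terminal states simply have no outgoing edges):

```python
# Allowed edges, taken directly from the diagram above.
TRANSITIONS = {
    "QUEUED":  {"RUNNING", "CANCELED", "EXPIRED"},
    "RUNNING": {"COMPLETED", "FAILED", "CANCELED"},
}

def transition(state: str, new_state: str) -> str:
    # COMPLETED/FAILED/CANCELED/EXPIRED are terminal: no outgoing edges.
    allowed = TRANSITIONS.get(state, set())
    if new_state not in allowed:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state
```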
---

## 7) Matching (MVP)

- Filter ONLINE miners by **capabilities** & **region**.
- Prefer the lowest `inflight` (simple load balancing).
- Tiebreak by earliest `heartbeat_at`, or randomly.
- Lock the job row → assign → return to miner.

---

## 8) Auth & Rate Limits

- **API keys** via the `X-Api-Key` header for `client`, `miner`, `admin`.
- Optional **HMAC** (`X-Signature`) over the body with `HMAC_SECRET`.
- **Idempotency**: clients send `Idempotency-Key` on **POST /jobs**.
- **Rate limiting**: naive per-key window (e.g., 60 req / 60 s).
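A minimal sketch of the optional HMAC check, using the stdlib and a constant-time compare. Hex encoding of the digest is an assumption; the header name `X-Signature` and `HMAC_SECRET` come from above:

```python
import hashlib
import hmac

def verify_signature(secret: str, body: bytes, x_signature: str) -> bool:
    # HMAC-SHA256 over the raw request body, hex-encoded (assumed encoding).
    expected = hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
    # compare_digest avoids timing side channels on the comparison.
    return hmac.compare_digest(expected, x_signature)
```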
---

## 9) REST API

### Client

- `POST /v1/jobs`
  - Create a job. Returns `job_id`.
- `GET /v1/jobs/{job_id}`
  - Job status & metadata.
- `GET /v1/jobs/{job_id}/result`
  - Result (200 when ready, 425 if not ready).
- `POST /v1/jobs/{job_id}/cancel`
  - Cancel if QUEUED or RUNNING (best effort).

### Miner

- `POST /v1/miners/register`
  - Upsert miner capabilities; set ONLINE.
- `POST /v1/miners/heartbeat`
  - Touch `heartbeat_at`, report `inflight`.
- `POST /v1/miners/poll`
  - Long-poll for the next job → returns a job or 204.
- `POST /v1/miners/{job_id}/start`
  - Confirm start (optional if `poll` implies start).
- `POST /v1/miners/{job_id}/result`
  - Submit result; transitions to COMPLETED.
- `POST /v1/miners/{job_id}/fail`
  - Submit failure; transitions to FAILED.
- `POST /v1/miners/drain`
  - Gracefully stop accepting new jobs.

### Admin

- `GET /v1/admin/stats`
  - Queue depth, miners online, success rates, average latency.
- `GET /v1/admin/jobs?state=&limit=...`
- `GET /v1/admin/miners`

**Error Shape**
```json
{ "error": { "code": "STRING_CODE", "message": "human readable", "details": {} } }
```

Common codes: `UNAUTHORIZED_KEY`, `RATE_LIMITED`, `INVALID_PAYLOAD`, `NO_ELIGIBLE_MINER`, `JOB_NOT_FOUND`, `JOB_NOT_READY`, `CONFLICT_STATE`.

---

## 10) WebSockets (optional MVP+)

- `WS /v1/ws/miner?api_key=...`
  - Server → miner: `job.assigned`
  - Miner → server: `heartbeat`, `result`, `fail`
- `WS /v1/ws/client?job_id=...&api_key=...`
  - Server → client: `state.changed`, `result.ready`

The fallback remains HTTP long-polling.

---

## 11) Result Storage

- **Inline JSON** if ≤ 1 MB.
- For larger payloads: store to a disk path (e.g., `/var/lib/coordinator/results/{job_id}`) and return `result_ref`.
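A sketch of the inline-vs-ref decision. The 1 MB cutoff and results path come from the bullets above; the `store_result` function name is illustrative:

```python
import json
from pathlib import Path

INLINE_LIMIT = 1_000_000  # ~1 MB, per the rule above

def store_result(job_id: str, result: dict,
                 base_dir: str = "/var/lib/coordinator/results") -> dict:
    raw = json.dumps(result).encode()
    if len(raw) <= INLINE_LIMIT:
        # Small enough: return inline, no file written.
        return {"result": result, "result_ref": None}
    # Large payload: persist to disk and hand back a reference.
    path = Path(base_dir) / job_id
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_bytes(raw)
    return {"result": None, "result_ref": str(path)}
```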
---

## 12) Settlement Hooks (stub)

`settlement.py` exposes:
- `record_usage(job, miner)`
- `quote_cost(job)`

These are later wired to **AIToken** mint/transfer when the blockchain lands.

---

## 13) Minimal FastAPI Skeleton

```python
# app/main.py
from fastapi import FastAPI
from app.routers import client, miner, admin

def create_app() -> FastAPI:
    app = FastAPI(title="AITBC Coordinator API", version="0.1.0")
    app.include_router(client.router, prefix="/v1")
    app.include_router(miner.router, prefix="/v1")
    app.include_router(admin.router, prefix="/v1")
    return app

app = create_app()
```

```python
# app/models.py
from pydantic import BaseModel, Field
from typing import Any, Dict, List, Optional

class Constraints(BaseModel):
    gpu: Optional[str] = None
    cuda: Optional[str] = None
    min_vram_gb: Optional[int] = None
    models: Optional[List[str]] = None
    region: Optional[str] = None
    max_price: Optional[float] = None

class JobCreate(BaseModel):
    payload: Dict[str, Any]
    constraints: Constraints = Field(default_factory=Constraints)
    ttl_seconds: int = 900

class JobView(BaseModel):
    job_id: str
    state: str
    assigned_miner_id: Optional[str] = None
    requested_at: str
    expires_at: str
    error: Optional[str] = None

class MinerRegister(BaseModel):
    capabilities: Dict[str, Any]
    concurrency: int = 1
    region: Optional[str] = None

class PollRequest(BaseModel):
    max_wait_seconds: int = 15

class AssignedJob(BaseModel):
    job_id: str
    payload: Dict[str, Any]
```

```python
# app/routers/client.py
from fastapi import APIRouter, Depends, HTTPException
from app.models import JobCreate, JobView
from app.deps import require_client_key

router = APIRouter(tags=["client"])

@router.post("/jobs", response_model=JobView)
def submit_job(req: JobCreate, client_id: str = Depends(require_client_key)):
    # enqueue + return JobView
    ...

@router.get("/jobs/{job_id}", response_model=JobView)
def get_job(job_id: str, client_id: str = Depends(require_client_key)):
    ...
```

```python
# app/routers/miner.py
from fastapi import APIRouter, Depends
from app.models import MinerRegister, PollRequest, AssignedJob
from app.deps import require_miner_key

router = APIRouter(tags=["miner"])

@router.post("/miners/register")
def register(req: MinerRegister, miner_id: str = Depends(require_miner_key)):
    ...

@router.post("/miners/poll", response_model=AssignedJob, status_code=200)
def poll(req: PollRequest, miner_id: str = Depends(require_miner_key)):
    # try to dequeue, else 204
    ...
```
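The routers above depend on `require_client_key` / `require_miner_key` from `deps.py`. A framework-free sketch of the lookup behind them (in the real dependency this would read `X-Api-Key` and raise `fastapi.HTTPException(401)`; `make_key_checker` and `Unauthorized` are illustrative names):

```python
class Unauthorized(Exception):
    """Stands in for HTTPException(401, UNAUTHORIZED_KEY) in this sketch."""

def make_key_checker(keys_csv: str):
    # CLIENT_API_KEYS / MINER_API_KEYS are comma-separated in .env (section 4).
    allowed = {k.strip() for k in keys_csv.split(",") if k.strip()}
    def check(x_api_key: str) -> str:
        if x_api_key not in allowed:
            raise Unauthorized("UNAUTHORIZED_KEY")
        # In the MVP the key itself can serve as the principal id.
        return x_api_key
    return check
```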
Run:
```bash
uvicorn app.main:app --host 127.0.0.1 --port 8011 --reload
```

OpenAPI: `http://127.0.0.1:8011/docs`

---

## 14) Matching & Queue Pseudocode

```python
def match_next_job(miner):
    eligible = db.jobs.filter(
        state="QUEUED",
        constraints.satisfied_by(miner.capabilities)
    ).order_by("requested_at").first()
    if not eligible:
        return None
    db.txn(lambda:
        db.jobs.assign(eligible.job_id, miner.id) and
        db.states.transition(eligible.job_id, "RUNNING")
    )
    return eligible
```

---

## 15) CURL Examples

**Client creates a job**
```bash
curl -sX POST http://127.0.0.1:8011/v1/jobs \
  -H "X-Api-Key: ${CLIENT_API_KEY}" \
  -H 'Idempotency-Key: 7d4a...' \
  -H 'Content-Type: application/json' \
  -d '{
    "payload": {"task":"sum","a":2,"b":3},
    "constraints": {"gpu": null, "region": "eu-central"}
  }'
```

**Miner registers + polls**
```bash
curl -sX POST http://127.0.0.1:8011/v1/miners/register \
  -H "X-Api-Key: ${MINER_API_KEY}" \
  -H 'Content-Type: application/json' \
  -d '{"capabilities":{"gpu":"RTX4060Ti","cuda":"12.3","vram_gb":16},"concurrency":2,"region":"eu-central"}'

curl -i -sX POST http://127.0.0.1:8011/v1/miners/poll \
  -H "X-Api-Key: ${MINER_API_KEY}" \
  -H 'Content-Type: application/json' \
  -d '{"max_wait_seconds":10}'
```

**Miner submits result**
```bash
curl -sX POST http://127.0.0.1:8011/v1/miners/<JOB_ID>/result \
  -H "X-Api-Key: ${MINER_API_KEY}" \
  -H 'Content-Type: application/json' \
  -d '{"result":{"sum":5},"metrics":{"latency_ms":42}}'
```

**Client fetches result**
```bash
curl -s http://127.0.0.1:8011/v1/jobs/<JOB_ID>/result \
  -H "X-Api-Key: ${CLIENT_API_KEY}"
```

---

## 16) Timeouts & Health

- **Job TTL**: auto-expire QUEUED jobs after `JOB_TTL_SECONDS`.
- **Heartbeat**: miners post every `HEARTBEAT_INTERVAL_SECONDS`.
- **Miner OFFLINE** if there is no heartbeat for `HEARTBEAT_TIMEOUT_SECONDS`.
- **Requeue**: RUNNING jobs from OFFLINE miners go back to QUEUED.
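These rules can be sketched as a periodic sweep. In-memory dicts stand in for the DB; field names follow section 5, and the `sweep` function itself is illustrative:

```python
def sweep(miners: dict, jobs: dict, now: float,
          heartbeat_timeout: float = 30.0) -> None:
    """Mark silent miners OFFLINE, then requeue their RUNNING jobs."""
    for miner in miners.values():
        if miner["status"] == "ONLINE" and now - miner["heartbeat_at"] > heartbeat_timeout:
            miner["status"] = "OFFLINE"
    for job in jobs.values():
        m = miners.get(job.get("assigned_miner_id"))
        # Requeue if the assigned miner vanished or went OFFLINE.
        if job["state"] == "RUNNING" and (m is None or m["status"] == "OFFLINE"):
            job["state"] = "QUEUED"
            job["assigned_miner_id"] = None
```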
---

## 17) Security Notes

- Validate `payload` size & type; enforce the 1 MB inline maximum.
- Optional **HMAC** signature for tamper detection.
- Sanitize/validate miner-reported capabilities.
- Log every state transition (append-only).

---

## 18) Admin Metrics (MVP)

- Queue depth, running count
- Miners online/offline, inflight
- P50/P95 job latency
- Success/fail/cancel rates (windowed)

---

## 19) Future Stages

- **Blockchain layer**: mint on verified compute; tie to `record_usage`.
- **Pool hub**: multi-coordinator balancing; marketplace.
- **Reputation**: miner scoring, penalties, slashing.
- **Bidding**: price discovery; client max price.

---

## 20) Checklist (Windsurf)

1. Create the repo structure from section **3**.
2. Implement `.env` & `config.py` keys from **4**.
3. Add `models.py`, `states.py`, `deps.py` (auth, rate limit).
4. Implement DB tables for Job, Miner, WorkerSession.
5. Implement `queue.py` and `matching.py`.
6. Wire the **client** and **miner** routers (MVP endpoints).
7. Add admin stats (basic counts).
8. Add OpenAPI tags and descriptions.
9. Add curl `.http` test files.
10. Systemd unit + Nginx proxy (later).
@@ -1,235 +0,0 @@
# examples/ — Minimal runnable examples for integrators

This folder contains three self-contained, copy-pasteable starters that demonstrate how to talk to the coordinator API, submit jobs, poll status, and (optionally) verify signed receipts.

```
examples/
├─ explorer-web/                # (docs live elsewhere; not a runnable example)
├─ quickstart-client-python/    # Minimal Python client
├─ quickstart-client-js/        # Minimal Node/Browser client
└─ receipts-sign-verify/        # Receipt format + sign/verify demos
```

> Conventions: Debian 12/13, zsh, no sudo (run as root if you like). Keep env in a `.env` file. Replace example URLs/tokens with your own.

---

## 1) quickstart-client-python/

### What this shows
- Create a job request
- Submit it to `COORDINATOR_URL`
- Poll job status until `succeeded|failed|timeout`
- Fetch the result payload
- (Optional) Save the receipt JSON for later verification

### Files Windsurf should ensure exist
- `main.py` — the tiny client (≈ 80–120 LOC)
- `requirements.txt` — `httpx`, `python-dotenv` (and `pydantic` if you want models)
- `.env.example` — `COORDINATOR_URL`, `API_TOKEN`
- `README.md` — one-screen run guide

### How to run
```sh
cd examples/quickstart-client-python
python3 -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt

# Prepare environment
cp .env.example .env
# edit .env → set COORDINATOR_URL=https://api.local.test:8443, API_TOKEN=xyz

# Run
python main.py --prompt "hello compute" --timeout 60

# Outputs:
# - logs to stdout
# - writes ./out/result.json
# - writes ./out/receipt.json (if provided by the coordinator)
```

### Coordinator endpoints the code should touch
- `POST /v1/jobs` → returns `{ job_id }`
- `GET /v1/jobs/{job_id}` → returns `{ status, progress?, result? }`
- `GET /v1/jobs/{job_id}/receipt` → returns `{ receipt }` (optional)

Keep the client resilient: exponential backoff (100 ms → 2 s), with a total wall-time cap from `--timeout`.
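A sketch of that polling loop. `get_status` stands in for the `GET /v1/jobs/{job_id}` call, and the injectable `sleep` makes the loop testable:

```python
import time

def poll_job(get_status, timeout_s: float = 60.0, sleep=time.sleep) -> str:
    """Poll get_status() until a terminal state, backing off 100 ms -> 2 s."""
    deadline = time.monotonic() + timeout_s
    delay = 0.1
    while True:
        status = get_status()
        if status in ("succeeded", "failed"):
            return status
        if time.monotonic() >= deadline:
            return "timeout"
        sleep(delay)
        delay = min(delay * 2, 2.0)  # exponential backoff, capped at 2 s
```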
---

## 2) quickstart-client-js/

### What this shows
- An identical flow to the Python quickstart
- Two variants: Node (fetch via `undici`) and Browser (native `fetch`)

### Files Windsurf should ensure exist
- `node/`
  - `package.json` — `undici`, `dotenv`
  - `index.js` — Node example client
  - `.env.example`
  - `README.md`
- `browser/`
  - `index.html` — minimal UI with a Prompt box + “Run” button
  - `app.js` — client logic (no build step)
  - `README.md`

### How to run (Node)
```sh
cd examples/quickstart-client-js/node
npm i
cp .env.example .env
# edit .env → set COORDINATOR_URL, API_TOKEN
node index.js "hello compute"
```

### How to run (Browser)
```sh
cd examples/quickstart-client-js/browser
# Serve statically (choose one)
python3 -m http.server 8080
# or
busybox httpd -f -p 8080
```
Open `http://localhost:8080` and paste your coordinator URL + token into the form.
The app:
- `POST /v1/jobs`
- polls `GET /v1/jobs/{id}` every 1 s (with a 60 s guard)
- downloads `receipt.json` if available

---

## 3) receipts-sign-verify/

### What this shows
- The receipt JSON structure used by AITBC examples
- Deterministic signing over canonicalized JSON (RFC 8785-style or stable key order)
- Ed25519 signing & verifying in Python and JS
- CLI snippets to verify receipts offline

> If the project standardizes on another curve, swap the libs accordingly. For Ed25519:
> - Python: `pynacl`
> - JS: `@noble/ed25519`

### Files Windsurf should ensure exist
- `spec.md` — human-readable schema (see below)
- `python/`
  - `verify.py` — `python verify.py ./samples/receipt.json ./pubkeys/poolhub_ed25519.pub`
  - `requirements.txt` — `pynacl`
- `js/`
  - `verify.mjs` — `node js/verify.mjs ./samples/receipt.json ./pubkeys/poolhub_ed25519.pub`
  - `package.json` — `@noble/ed25519`
- `samples/receipt.json` — realistic sample
- `pubkeys/poolhub_ed25519.pub` — PEM or raw 32-byte hex

### Minimal receipt schema (for `spec.md`)
```jsonc
{
  "version": "1",
  "job_id": "string",
  "client_id": "string",
  "miner_id": "string",
  "started_at": "2025-09-26T14:00:00Z",
  "completed_at": "2025-09-26T14:00:07Z",
  "units_billed": 123,              // e.g., “AIToken compute units”
  "result_hash": "sha256:…",        // hex
  "metadata": { "model": "…" },     // optional; stable ordering for signing
  "signature": {
    "alg": "Ed25519",
    "key_id": "poolhub-ed25519-2025-09",
    "sig": "base64url…"             // signature over the canonicalized receipt WITHOUT this signature object
  }
}
```

### CLI usage

**Python**
```sh
cd examples/receipts-sign-verify/python
python3 -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt
python verify.py ../samples/receipt.json ../pubkeys/poolhub_ed25519.pub
# exit code 0 = valid, non-zero = invalid
```

**Node**
```sh
cd examples/receipts-sign-verify/js
npm i
node verify.mjs ../samples/receipt.json ../pubkeys/poolhub_ed25519.pub
```

**Implementation notes for Windsurf**
- Canonicalize JSON before hashing/signing (stable key order, UTF-8, no trailing spaces).
- Sign the bytes of `sha256(canonical_json_without_signature_block)`.
- Reject if `completed_at < started_at`, the `alg` is unknown, or the `result_hash` mismatches.
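A stdlib-only sketch of the canonicalization and sanity checks above. The actual Ed25519 verification would use `pynacl` (see the comment); function names here are illustrative:

```python
import hashlib
import json

def canonical_bytes(receipt: dict) -> bytes:
    # Stable key order, compact separators, UTF-8: the bytes actually signed,
    # with the "signature" object stripped per the notes above.
    unsigned = {k: v for k, v in receipt.items() if k != "signature"}
    return json.dumps(unsigned, sort_keys=True, separators=(",", ":"),
                      ensure_ascii=False).encode("utf-8")

def message_to_verify(receipt: dict) -> bytes:
    # Sign/verify over sha256(canonical_json_without_signature_block).
    return hashlib.sha256(canonical_bytes(receipt)).digest()

def sanity_checks(receipt: dict) -> bool:
    # ISO-8601 UTC strings compare correctly as plain strings.
    sig = receipt.get("signature", {})
    return (sig.get("alg") == "Ed25519"
            and receipt.get("completed_at", "") >= receipt.get("started_at", ""))

# Real verification with pynacl would then be:
#   nacl.signing.VerifyKey(pub_bytes).verify(message_to_verify(r), sig_bytes)
```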
---

## Shared environment

All quickstarts read the following from `.env` or in-page form fields:

```
COORDINATOR_URL=https://api.local.test:8443
API_TOKEN=replace-me
# Optional: REQUEST_TIMEOUT_SEC=60
```

HTTP headers to include:
```
Authorization: Bearer <API_TOKEN>
Content-Type: application/json
```

---

## Windsurf checklist (do this automatically)

1. **Create folders & files**
   - `quickstart-client-python/{main.py,requirements.txt,.env.example,README.md}`
   - `quickstart-client-js/node/{index.js,package.json,.env.example,README.md}`
   - `quickstart-client-js/browser/{index.html,app.js,README.md}`
   - `receipts-sign-verify/{spec.md,samples/receipt.json,pubkeys/poolhub_ed25519.pub}`
   - `receipts-sign-verify/python/{verify.py,requirements.txt}`
   - `receipts-sign-verify/js/{verify.mjs,package.json}`

2. **Fill templates**
   - Implement the `POST /v1/jobs`, `GET /v1/jobs/{id}`, and `GET /v1/jobs/{id}/receipt` calls.
   - Poll with backoff; stop at terminal states; write `out/result.json` & `out/receipt.json`.

3. **Wire Ed25519 libs**
   - Python: `pynacl` verify(`public_key`, `message`, `signature`)
   - JS: `@noble/ed25519` verifySync

4. **Add DX niceties**
   - `.env.example` everywhere
   - `README.md` with copy-paste run steps (no global installs)
   - Minimal logging and a clear non-zero exit on failure

5. **Smoke tests**
   - The Python quickstart runs end-to-end with a mock coordinator (use our tiny FastAPI mock if available).
   - The JS Node client runs with `.env`.
   - The browser client works via `http://localhost:8080`.

---

## Troubleshooting

- **401 Unauthorized** → check `API_TOKEN`, CORS (browser), or a missing `Authorization` header.
- **CORS in browser** → the coordinator must set:
  - `Access-Control-Allow-Origin: *` (or your host)
  - `Access-Control-Allow-Headers: Authorization, Content-Type`
  - `Access-Control-Allow-Methods: GET, POST, OPTIONS`
- **Receipt verify fails** → most often due to non-canonical JSON or the wrong public key.

---

## License & reuse

Keep the examples MIT-licensed. Add a short header to each file:
```
MIT © AITBC Examples — This is demo code; use at your own risk.
```
@@ -1,322 +0,0 @@
# explorer-web.md

Chain Explorer (blocks · tx · receipts)

## 0) Purpose

A lightweight, fast, dark-themed web UI to browse a minimal AITBC blockchain:
- Latest blocks & block detail
- Transaction list & detail
- Address detail (balance, nonce, tx history)
- Receipt / logs view
- Simple search (block #, hash, tx hash, address)

The MVP reads from **blockchain-node** (HTTP/WS API).
No write operations.

---

## 1) Tech & Conventions
- **Pure frontend** (no backend rendering): static HTML + JS modules + CSS.
- **No frameworks** (keep it portable and fast).
- **Files split**: HTML, CSS, and JS in separate files (user preference).
- **ES Modules** with strict typing via JSDoc or TypeScript (optional).
- **Dark theme** with orange/ice accents (brand).
- **No animations** unless explicitly requested.
- **Time format**: UTC ISO + relative (e.g., “2m ago”).

---

## 2) Folder Layout (within workspace)
```
explorer-web/
├─ public/
│  ├─ index.html        # routes: / (latest blocks)
│  ├─ block.html        # /block?hash=... or /block?number=...
│  ├─ tx.html           # /tx?hash=...
│  ├─ address.html      # /address?addr=...
│  ├─ receipts.html     # /receipts?tx=...
│  ├─ 404.html
│  ├─ assets/
│  │  ├─ logo.svg
│  │  └─ icons/*.svg
│  ├─ css/
│  │  ├─ base.css
│  │  ├─ layout.css
│  │  └─ theme-dark.css
│  └─ js/
│     ├─ config.js      # API endpoint(s)
│     ├─ api.js         # fetch helpers
│     ├─ store.js       # simple state cache
│     ├─ utils.js       # formatters (hex, time, numbers)
│     ├─ components/
│     │  ├─ header.js
│     │  ├─ footer.js
│     │  ├─ searchbox.js
│     │  ├─ block-table.js
│     │  ├─ tx-table.js
│     │  ├─ pager.js
│     │  └─ keyvalue.js
│     ├─ pages/
│     │  ├─ home.js     # latest blocks + mempool/heads
│     │  ├─ block.js
│     │  ├─ tx.js
│     │  ├─ address.js
│     │  └─ receipts.js
│     └─ vendors/       # (empty; we keep it native for now)
├─ docs/
│  └─ explorer-web.md   # this file
└─ README.md
```

---

## 3) API Contracts (read-only)
Assume **blockchain-node** exposes:

### 3.1 REST (HTTP)
- `GET /api/chain/head` → `{ number, hash, timestamp }`
- `GET /api/blocks?limit=25&before=<blockNumber>` → `[{number,hash,parentHash,timestamp,txCount,miner,size,gasUsed}]`
- `GET /api/block/by-number/:n` → `{ ...fullBlock }`
- `GET /api/block/by-hash/:h` → `{ ...fullBlock }`
- `GET /api/tx/:hash` → `{ hash, from, to, nonce, value, fee, gas, gasPrice, blockHash, blockNumber, timestamp, input }`
- `GET /api/address/:addr` → `{ address, balance, nonce, txCount }`
- `GET /api/address/:addr/tx?limit=25&before=<blockNumber>` → `[{hash,blockNumber,from,to,value,fee,timestamp}]`
- `GET /api/tx/:hash/receipt` → `{ status, gasUsed, logs: [{address, topics:[...], data, index}], cumulativeGasUsed }`
- `GET /api/search?q=...`
  - Accepts a block number, block hash, tx hash, or address
  - Returns a typed result: `{ type: "block"|"tx"|"address", key: ... }`

### 3.2 WebSocket (optional, later)
- `ws://.../api/stream/heads` → emits new head `{number,hash,timestamp}`
- `ws://.../api/stream/mempool` → emits tx previews `{hash, from, to, value, timestamp}`

> If the node isn’t ready, create a tiny mock server (FastAPI) consistent with these shapes (already planned in other modules).

---

## 4) Pages & UX

### 4.1 Header (every page)
- Left: logo + “AITBC Explorer”
- Center: search box (accepts block #, block/tx hash, address)
- Right: network tag (e.g., “Local Dev”) + head block # (live)

### 4.2 Home `/`
- **Latest Blocks** (table)
  - columns: `#`, `Hash (short)`, `Tx`, `Miner`, `GasUsed`, `Time`
  - infinite scroll / “Load older”
- (optional) **Mempool feed** (compact list, toggleable)
- Empty state: helpful instructions + sample query strings

### 4.3 Block Detail `/block?...`
- Top summary (KeyValue component)
  - `Number, Hash, Parent, Miner, Timestamp, Size, GasUsed, Difficulty?`
- Transactions table (paginated)
- “Navigate”: Parent ↖, Next ↗, view as raw JSON (debug)

### 4.4 Tx Detail `/tx?hash=...`
- Summary: `Hash, Status, Block, From, To, Value, Fee, Nonce, Gas(gasPrice)`
- Receipt section (logs rendered as topics/data, collapsible)
- Input data: hex preview + decode attempt (if an ABI registry exists – later)

### 4.5 Address `/address?addr=...`
- Summary: `Address, Balance, Nonce, TxCount`
- Transactions list (sent/received filter)
- (later) Token balances when the chain supports them

### 4.6 Receipts `/receipts?tx=...`
- Focused receipts + logs view with copy buttons

### 4.7 404
- Friendly message + search

---

## 5) Components (JS modules)
- `header.js` : builds the header + binds search submit.
- `searchbox.js` : debounced input, detects query type (see utils).
- `block-table.js` : renders rows, short hashes, time-ago.
- `tx-table.js` : similar rendering with direction arrows.
- `pager.js` : simple “Load more” with an event callback.
- `keyvalue.js` : `<dl>` key/value grid for details.
- `footer.js` : version, links.

---

## 6) Utils
- `formatHexShort(hex, bytes=4)` → `0x1234…abcd`
- `formatNumber(n)` with thin-space groupings
- `formatValueWei(wei)` → AIT units when available (or plain wei)
- `timeAgo(ts)` + `formatUTC(ts)`
- `parseQuery()` helpers for `?hash=...`
- `detectSearchType(q)`:
  - `0x` + 66 chars → tx/block hash
  - numeric → block number
  - `0x` + 42 chars → address
  - fallback → “unknown”
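The detection rules above, sketched in Python for brevity (the real `detectSearchType` lives in `utils.js`; the return labels here are illustrative, and "66/42 chars" is read as total string length including the `0x` prefix):

```python
def detect_search_type(q: str) -> str:
    q = q.strip()
    is_hex = lambda s: all(c in "0123456789abcdefABCDEF" for c in s)
    if q.startswith("0x") and len(q) == 66 and is_hex(q[2:]):
        return "hash"          # 32-byte tx or block hash
    if q.isdigit():
        return "block_number"
    if q.startswith("0x") and len(q) == 42 and is_hex(q[2:]):
        return "address"       # 20-byte address
    return "unknown"
```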
|
||||
|
||||
---
|
||||
|
||||
## 7) State (store.js)
|
||||
- `state.head` (number/hash/timestamp)
|
||||
- `state.cache.blocks[number] = block`
|
||||
- `state.cache.txs[hash] = tx`
|
||||
- `state.cache.address[addr] = {balance, nonce, txCount}`
|
||||
- Simple in-memory LRU eviction (optional).
|
||||
|
||||
---
|
||||
|
||||
## 8) Styling
|
||||
- `base.css`: resets, typography, links, buttons, tables.
|
||||
- `layout.css`: header/footer, grid, content widths (max 960px desktop).
|
||||
- `theme-dark.css`: colors:
|
||||
- bg: `#0b0f14`, surface: `#11161c`
|
||||
- text: `#e6eef7`
|
||||
- accent-orange: `#ff8a00`
|
||||
- accent-ice: `#a8d8ff`
|
||||
- Focus states visible. High contrast table rows on hover.
|
||||
|
||||
---
|
||||
|
||||
## 9) Error & Loading UX
|
||||
- Loading spinners (minimal).
|
||||
- Network errors: inline banner with retry.
|
||||
- Empty: clear messages & how to search.
|
||||
|
||||
---
|
||||
|
||||
## 10) Security & Hardening

- Treat inputs as untrusted.
- Only GETs; block any attempt to POST.
- Strict `Content-Security-Policy` sample (for hosting):
  - `default-src 'self'; img-src 'self' data:; style-src 'self'; script-src 'self'; connect-src 'self' https://blockchain-node.local;`
- Avoid third-party CDNs.

---
## 11) Test Plan (manual first)

1. Home loads head + 25 latest blocks.
2. Scroll/pager loads older batches.
3. Block search by number and by hash.
4. Tx search → detail + receipt.
5. Address search → tx list.
6. Error states when the node is offline.
7. Timezones: display UTC consistently.

---
## 12) Dev Tasks (Windsurf order of work)

1. **Scaffold** folders & empty files.
2. Implement `config.js` with `API_BASE`.
3. Implement `api.js` (fetch JSON + error handling).
4. Build `utils.js` (formatters + search-type detection).
5. Build `header.js` + `footer.js`.
6. Home page: blocks list + pager.
7. Block detail page.
8. Tx detail + receipts.
9. Address page with tx list.
10. 404 + polish (copy buttons, tiny helpers).
11. CSS pass (dark theme).
12. Final QA.

---
## 13) Mock Data (for offline dev)

Place under `public/js/vendors/mock.js` (opt-in):

- Export functions that resolve Promises with static JSON fixtures from `public/mock/*.json`.
- Toggle via the `config.js` flag `USE_MOCK=true`.
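The `USE_MOCK` toggle can be a single selection point; a minimal sketch, where `selectApi` and the shape of `realApi`/`mockApi` (mirroring section 16) are assumptions:

```javascript
// Sketch of the USE_MOCK switch: hand the UI either the real fetchers or
// fixture-backed ones with the same call surface.
export function selectApi(config, realApi, mockApi) {
  if (!config.USE_MOCK) return realApi;
  // Fail loudly at startup if the mock is missing a method the UI will call.
  for (const name of Object.keys(realApi)) {
    if (typeof mockApi[name] !== 'function') throw new Error(`mock missing: ${name}`);
  }
  return mockApi;
}
```

Checking the mock's surface against the real API at startup keeps the two from silently drifting apart during offline development.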
---
## 14) Minimal HTML Skeleton (example: index.html)

```html
<!doctype html>
<html lang="en">
<head>
  <meta charset="utf-8" />
  <meta name="viewport" content="width=device-width,initial-scale=1" />
  <title>AITBC Explorer</title>
  <link rel="stylesheet" href="./css/base.css" />
  <link rel="stylesheet" href="./css/layout.css" />
  <link rel="stylesheet" href="./css/theme-dark.css" />
</head>
<body>
  <header id="app-header"></header>
  <main id="app"></main>
  <footer id="app-footer"></footer>
  <script type="module">
    import { renderHeader } from './js/components/header.js';
    import { renderFooter } from './js/components/footer.js';
    import { renderHome } from './js/pages/home.js';

    renderHeader(document.getElementById('app-header'));
    renderFooter(document.getElementById('app-footer'));
    renderHome(document.getElementById('app'));
  </script>
</body>
</html>
```

---
## 15) config.js (example)

```js
export const CONFIG = {
  API_BASE: 'http://localhost:8545', // adapt to blockchain-node
  USE_MOCK: false,
  PAGE_SIZE: 25,
  NETWORK_NAME: 'Local Dev',
};
```

---
## 16) API Helpers (api.js — sketch)

```js
import { CONFIG } from './config.js';

async function jget(path) {
  const res = await fetch(`${CONFIG.API_BASE}${path}`, { method: 'GET' });
  if (!res.ok) throw new Error(`HTTP ${res.status}: ${path}`);
  return res.json();
}

// Build "limit=…&before=…", omitting the cursor on the first page.
function page(limit, before) {
  return `limit=${limit}${before != null ? `&before=${before}` : ''}`;
}

export const api = {
  head: () => jget('/api/chain/head'),
  blocks: (limit, before) => jget(`/api/blocks?${page(limit, before)}`),
  blockByNo: (n) => jget(`/api/block/by-number/${n}`),
  blockByHash: (h) => jget(`/api/block/by-hash/${h}`),
  tx: (hash) => jget(`/api/tx/${hash}`),
  receipt: (hash) => jget(`/api/tx/${hash}/receipt`),
  address: (addr) => jget(`/api/address/${addr}`),
  addressTx: (addr, limit, before) => jget(`/api/address/${addr}/tx?${page(limit, before)}`),
  search: (q) => jget(`/api/search?q=${encodeURIComponent(q)}`),
};
```

---
## 17) Performance Checklist

- Use pagination/infinite scroll (no huge payloads).
- Cache recent blocks/txs in memory (store.js).
- Avoid layout thrash: build tables via `DocumentFragment`.
- Defer non-critical fetches (e.g., mempool).
- Keep CSS small and critical-path only.
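The `DocumentFragment` point above can be sketched as a helper that builds every row off-DOM and attaches them with a single append (one reflow instead of one per row). `doc` is injected so the helper also runs outside a browser; in the page, pass `document`. The names are illustrative.

```javascript
// Sketch: fill a <tbody> via a DocumentFragment to avoid layout thrash.
export function fillRows(tbody, rows, doc = globalThis.document) {
  const frag = doc.createDocumentFragment();
  for (const row of rows) {
    const tr = doc.createElement('tr');
    for (const cell of row) {
      const td = doc.createElement('td');
      td.textContent = String(cell);
      tr.appendChild(td);
    }
    frag.appendChild(tr);
  }
  tbody.appendChild(frag); // single DOM mutation
  return tbody;
}
```

`block-table.js` and `tx-table.js` can both funnel their row rendering through a helper like this.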
---
## 18) Deployment

- Serve `public/` via Nginx under `/explorer/` or its own domain.
- Set the correct `connect-src` in the CSP to point at the blockchain node.
- Enable CORS on the blockchain node for the explorer origin (read-only).

---
## 19) Roadmap (post-MVP)

- Live head updates via WebSocket.
- Mempool stream view.
- ABI registry + input decoding.
- Token balances (when the chain supports them).
- Export tables to CSV/JSON.
- Theme switch (dark/light).

---
@@ -1,468 +0,0 @@
# Layout & Frontend Guidelines (Windsurf)

Target: **mobile‑first**, dark theme, max content width **960px** on desktop. Reference device: **Nothing Phone 2a**.

---

## 1) Design System
### 1.1 Color (Dark Theme)

- `--bg-0: #0b0f14` (page background)
- `--bg-1: #11161c` (cards/sections)
- `--tx-0: #e6edf3` (primary text)
- `--tx-1: #a7b3be` (muted)
- `--pri: #ff7a1a` (accent orange)
- `--ice: #b9ecff` (ice accent)
- `--ok: #3ddc97`
- `--warn: #ffcc00`
- `--err: #ff4d4d`

### 1.2 Typography

- Base font size: **16px** (mobile); scale up at desktop.
- Font stack: system UI (`-apple-system, Segoe UI, Roboto, Inter, Arial, sans-serif`).
- Line height: 1.5 body, 1.2 headings.

### 1.3 Spacing (8‑pt grid)

- `--s-1: 4px`, `--s-2: 8px`, `--s-3: 12px`, `--s-4: 16px`, `--s-5: 24px`, `--s-6: 32px`, `--s-7: 48px`, `--s-8: 64px`.

### 1.4 Radius & Shadow

- Radius: `--r-1: 8px`, `--r-2: 16px`.
- Shadow (subtle): `0 4px 20px rgba(0,0,0,.25)`.

---
## 2) Grid & Layout

### 2.1 Container

- **Mobile‑first**: full‑bleed padding.
- Desktop container: **max‑width: 960px**, centered.
- Side gutters: 16px (mobile), 24px (tablet), 32px (desktop).

**Breakpoint summary**

| Token | Min width | Container behaviour | Notes |
| --- | --- | --- | --- |
| `--bp-sm` | 360px | Fluid | Single-column layouts prioritise readability. |
| `--bp-md` | 480px | Fluid | Allow two-up cards or media/text pairings when needed. |
| `--bp-lg` | 768px | Max-width 90% (capped at 960px) | Stage tablet/landscape experiences before full desktop. |
| `--bp-xl` | 960px | Fixed 960px max width | Full desktop grid; persistent side rails allowed. |

Always respect `env(safe-area-inset-*)` on notch devices (use helpers like `.safe-b`).

### 2.2 Columns

- 12‑column grid on screens ≥ **960px**.
- Column gutter: 16px (mobile), 24px (≥960px).
- Utility classes (examples):
  - `.row { display:grid; grid-template-columns: repeat(12, 1fr); gap: var(--gutter); }`
  - `.col-12, .col-6, .col-4, .col-3` for common spans on desktop.
- Mobile stacks by default; use responsive helpers at breakpoints.

### 2.3 Breakpoints (Nothing Phone 2a aware)

- `--bp-sm: 360px` (small phones)
- `--bp-md: 480px` (Nothing 2a width in portrait, ~412–480 CSS px)
- `--bp-lg: 768px` (tablets / landscape phones)
- `--bp-xl: 960px` (desktop container)

**Mobile layout rules:**

- Navigation collapses to icon buttons with an overflow menu at `--bp-sm`.
- Multi-column sections stack; keep vertical rhythm using `var(--s-6)`.
- Sticky headers are 56px tall; ensure content uses `.safe-b` for bottom insets.

**Desktop enhancements:**

- Activate the `.row` grid with `.col-*` spans at `--bp-xl`.
- Introduce a side rail for filters or secondary nav (span 3 or 4 columns).
- Increase the typographic scale by one step (`clamp` already handles this).

> **Rule:** Build for < `--bp-lg` first; enhance progressively at `--bp-lg` and `--bp-xl`.

---
## 3) Page Chrome

### 3.1 Header

- Sticky top, height 56–64px.
- Left: brand; right: primary action or menu.
- Translucent on scroll (backdrop‑filter), solid at top.

### 3.2 Footer

- Thin bar with meta links. Uses the ice accent for separators.

### 3.3 Main

- Vertical rhythm: sections spaced by `var(--s-7)` (mobile) / `var(--s-8)` (desktop).
- Cards: background `var(--bg-1)`, radius `var(--r-2)`, padding `var(--s-6)`.

---
## 4) CSS File Strategy (per page)

**Every HTML page ships its own CSS file**, plus shared layers:

- `/css/base.css` — resets, variables, typography, utility helpers.
- `/css/components.css` — buttons, inputs, cards, modals, toast.
- `/css/layout.css` — grid/container/header/footer.
- `/css/pages/<page>.css` — page‑specific rules (one per HTML page).

**Naming** (BEM‑ish): `block__elem--mod`. Avoid nesting more than 2 levels.

**Example includes:**

```html
<link rel="stylesheet" href="/css/base.css">
<link rel="stylesheet" href="/css/components.css">
<link rel="stylesheet" href="/css/layout.css">
<link rel="stylesheet" href="/css/pages/dashboard.css">
```

---
## 5) Utilities (recommended)

- Spacing: `.mt-4`, `.mb-6`, `.px-4`, `.py-6` (map to the spacing scale).
- Flex/Grid: `.flex`, `.grid`, `.ai-c`, `.jc-b`.
- Display: `.hide-sm`, `.hide-lg` via media queries.

---
## 6) Toast Messages (center bottom)

**Position:** centered at the bottom, above safe‑area insets.

**Behavior:**

- Appear for 3–5s; pause on hover; dismiss on click.
- Max width 90% on mobile, 420px on desktop.
- Elevation + subtle slide/fade.

**Structure:**

```html
<div id="toast-root" aria-live="polite" aria-atomic="true"></div>
```

**Styles (concept):**

```css
#toast-root { position: fixed; left: 50%; bottom: max(16px, env(safe-area-inset-bottom)); transform: translateX(-50%); z-index: 9999; }
.toast { background: var(--bg-1); color: var(--tx-0); border: 1px solid rgba(185,236,255,.2); padding: var(--s-5) var(--s-6); border-radius: var(--r-2); box-shadow: 0 10px 30px rgba(0,0,0,.35); margin-bottom: var(--s-4); max-width: min(420px, 90vw); }
.toast--ok { border-color: rgba(61,220,151,.35); }
.toast--warn { border-color: rgba(255,204,0,.35); }
.toast--err { border-color: rgba(255,77,77,.35); }
```

**JS API (minimal):**

```js
function showToast(msg, type = "ok", ms = 3500) {
  const root = document.getElementById("toast-root");
  const el = document.createElement("div");
  el.className = `toast toast--${type}`;
  el.role = "status";
  el.textContent = msg;
  root.appendChild(el);
  const t = setTimeout(() => el.remove(), ms);
  el.addEventListener("mouseenter", () => clearTimeout(t));
  el.addEventListener("click", () => el.remove());
}
```

---
## 7) Browser Notifications (system tray)

**When to use:** Only for important, user‑initiated events (e.g., a new match, message, or scheduled session start). Always provide an in‑app alternative (toast/modal) for users who deny permission.

**Permission flow:**

```js
async function ensureNotifyPermission() {
  if (!("Notification" in window)) return false;
  if (Notification.permission === "granted") return true;
  if (Notification.permission === "denied") return false;
  const res = await Notification.requestPermission();
  return res === "granted";
}
```

**Send notification:**

```js
function notify(opts) {
  if (Notification.permission !== "granted") return;
  const n = new Notification(opts.title || "Update", {
    body: opts.body || "",
    icon: opts.icon || "/icons/notify.png",
    tag: opts.tag || "app-event",
    requireInteraction: !!opts.sticky
  });
  if (opts.onclick) n.addEventListener("click", opts.onclick);
}
```

**Pattern:**

```js
const ok = await ensureNotifyPermission();
if (ok) notify({ title: "Match window opens soon", body: "Starts in 10 min" });
else showToast("Enable notifications in settings to get alerts", "warn");
```

---
## 8) Forms & Inputs

- Hit targets ≥ 44×44px; labels always visible.
- Focus ring: `outline: 2px solid var(--ice)`.
- Validation: inline text in `--warn`/`--err`; never rely on color alone.

---

## 9) Performance

- CSS: ship **only** what a page uses (per‑page CSS). Avoid giant bundles.
- Images: `loading="lazy"`, responsive sizes; WebP/AVIF first.
- Fonts: use system fonts; if custom, `font-display: swap`.

---
## 10) Accessibility

- Ensure color contrast ratios meet WCAG AA (4.5:1 for body text, 3:1 for large text).
- Use semantic HTML elements (`<header>`, `<nav>`, `<main>`, etc.) and ARIA attributes for dynamic content.
- Support keyboard navigation with a logical tab order and visible focus indicators.
- Test with screen readers; provide text alternatives for images and label all form controls.
- Adapt layouts for assistive technologies using media queries and flexible components.

## 11) Accessibility Integration

- Apply utility classes for focus states (e.g., `:focus-visible` with a visible outline).
- Use ARIA roles for custom widgets and ensure all content is perceivable, operable, understandable, and robust.
## 12) Breakpoint Examples

```css
/* Mobile-first defaults here */
@media (min-width: 768px) { /* tablets / landscape phones */ }
@media (min-width: 960px) { /* desktop grid & container */ }
```

---

## 13) Checklist (per page)

- [ ] Uses `/css/pages/<page>.css` + shared layers
- [ ] Container max‑width 960px, centered; gutters follow the breakpoint summary
- [ ] Mobile‑first; tested on Nothing Phone 2a
- [ ] Toasts render center‑bottom
- [ ] Notifications gated by permission & mirrored with toasts
- [ ] A11y pass (heading order, labels, focus, contrast)

---
## Appendix A — `/css/base.css`

```css
/* Base: variables, reset, typography, utilities */
:root{
  --bg-0:#0b0f14; --bg-1:#11161c;
  --tx-0:#e6edf3; --tx-1:#a7b3be;
  --pri:#ff7a1a; --ice:#b9ecff;
  --ok:#3ddc97; --warn:#ffcc00; --err:#ff4d4d;

  --s-1:4px; --s-2:8px; --s-3:12px; --s-4:16px;
  --s-5:24px; --s-6:32px; --s-7:48px; --s-8:64px;
  --r-1:8px; --r-2:16px;
  --gutter:16px;
}

@media (min-width:960px){ :root{ --gutter:24px; } }

/* Reset */
*{ box-sizing:border-box; }
html,body{ height:100%; }
html{ color-scheme:dark; }
body{
  margin:0; background:var(--bg-0); color:var(--tx-0);
  font:16px/1.5 -apple-system, Segoe UI, Roboto, Inter, Arial, sans-serif;
  -webkit-font-smoothing:antialiased; -moz-osx-font-smoothing:grayscale;
}
img,svg,video{ max-width:100%; height:auto; display:block; }
button,input,select,textarea{ font:inherit; color:inherit; }
:focus-visible{ outline:2px solid var(--ice); outline-offset:2px; }

/* Typography */
h1{ font-size:clamp(24px, 3.5vw, 36px); line-height:1.2; margin:0 0 var(--s-4); }
h2{ font-size:clamp(20px, 3vw, 28px); line-height:1.25; margin:0 0 var(--s-3); }
h3{ font-size:clamp(18px, 2.5vw, 22px); line-height:1.3; margin:0 0 var(--s-2); }
p{ margin:0 0 var(--s-4); color:var(--tx-1); }

/* Links */
a{ color:var(--ice); text-decoration:none; }
a:hover{ text-decoration:underline; }

/* Utilities */
.container{ width:100%; max-width:960px; margin:0 auto; padding:0 var(--gutter); }
.flex{ display:flex; } .grid{ display:grid; }
.ai-c{ align-items:center; } .jc-b{ justify-content:space-between; }
.mt-4{ margin-top:var(--s-4); } .mb-6{ margin-bottom:var(--s-6); } .px-4{ padding-left:var(--s-4); padding-right:var(--s-4); } .py-6{ padding-top:var(--s-6); padding-bottom:var(--s-6); }
.hide-sm{ display:none; }
@media (min-width:960px){ .hide-lg{ display:none; } .hide-sm{ display:initial; } }

/* Safe area helpers */
.safe-b{ padding-bottom:max(var(--s-4), env(safe-area-inset-bottom)); }
```

---
## Appendix B — `/css/layout.css`

```css
/* Grid, header, footer, sections */
.row{ display:grid; grid-template-columns:repeat(12, 1fr); gap:var(--gutter); }

/* Mobile-first: stack columns */
[class^="col-"]{ grid-column:1/-1; }

@media (min-width:960px){
  .col-12{ grid-column:span 12; }
  .col-6{ grid-column:span 6; }
  .col-4{ grid-column:span 4; }
  .col-3{ grid-column:span 3; }
}

header.site{
  position:sticky; top:0; z-index:50;
  backdrop-filter:saturate(1.2) blur(8px);
  background:color-mix(in oklab, var(--bg-0) 85%, transparent);
  border-bottom:1px solid rgba(185,236,255,.12);
}
header.site .inner{ height:64px; }

footer.site{
  margin-top:var(--s-8);
  border-top:1px solid rgba(185,236,255,.12);
  background:var(--bg-0);
}
footer.site .inner{ height:56px; font-size:14px; color:var(--tx-1); }

main section{ margin: var(--s-7) 0; }
.card{
  background:var(--bg-1);
  border:1px solid rgba(185,236,255,.12);
  border-radius:var(--r-2);
  padding:var(--s-6);
  box-shadow:0 4px 20px rgba(0,0,0,.25);
}
```

---
## Appendix C — `/css/components.css`

```css
/* Buttons */
.btn{ display:inline-flex; align-items:center; justify-content:center; gap:8px;
  border-radius:var(--r-1); padding:10px 16px; border:1px solid transparent; cursor:pointer;
  background:var(--pri); color:#111; font-weight:600; text-align:center; }
.btn:hover{ filter:brightness(1.05); }
.btn--subtle{ background:transparent; color:var(--tx-0); border-color:rgba(185,236,255,.2); }
.btn--ghost{ background:transparent; color:var(--pri); border-color:transparent; }
.btn:disabled{ opacity:.6; cursor:not-allowed; }

/* Inputs */
.input{ width:100%; background:#0e1319; color:var(--tx-0);
  border:1px solid rgba(185,236,255,.18); border-radius:var(--r-1);
  padding:12px 14px; }
.input::placeholder{ color:#6f7b86; }

/* Badges */
.badge{ display:inline-block; padding:2px 8px; border-radius:999px; font-size:12px; border:1px solid rgba(185,236,255,.2); }
.badge--ok{ color:#0c2; border-color:rgba(61,220,151,.4); }
.badge--warn{ color:#fc0; border-color:rgba(255,204,0,.4); }
.badge--err{ color:#f66; border-color:rgba(255,77,77,.4); }

/* Toasts */
#toast-root{ position:fixed; left:50%; bottom:max(16px, env(safe-area-inset-bottom)); transform:translateX(-50%); z-index:9999; }
.toast{ background:var(--bg-1); color:var(--tx-0);
  border:1px solid rgba(185,236,255,.2); padding:var(--s-5) var(--s-6);
  border-radius:var(--r-2); box-shadow:0 10px 30px rgba(0,0,0,.35);
  margin-bottom:var(--s-4); max-width:min(420px, 90vw);
  opacity:0; translate:0 8px; animation:toast-in .24s ease-out forwards; }
.toast--ok{ border-color:rgba(61,220,151,.35); }
.toast--warn{ border-color:rgba(255,204,0,.35); }
.toast--err{ border-color:rgba(255,77,77,.35); }
@keyframes toast-in{ to{ opacity:1; translate:0 0; } }
```

---
## Appendix D — `/js/toast.js`

```js
export function showToast(msg, type = "ok", ms = 3500) {
  const root = document.getElementById("toast-root") || (() => {
    const r = document.createElement("div");
    r.id = "toast-root";
    document.body.appendChild(r);
    return r;
  })();

  const el = document.createElement("div");
  el.className = `toast toast--${type}`;
  el.role = "status";
  el.textContent = msg;
  root.appendChild(el);

  const t = setTimeout(() => el.remove(), ms);
  el.addEventListener("mouseenter", () => clearTimeout(t));
  el.addEventListener("click", () => el.remove());
}
```

---
## Appendix E — `/js/notify.js`

```js
export async function ensureNotifyPermission() {
  if (!("Notification" in window)) return false;
  if (Notification.permission === "granted") return true;
  if (Notification.permission === "denied") return false;
  const res = await Notification.requestPermission();
  return res === "granted";
}

export function notify(opts) {
  if (Notification.permission !== "granted") return;
  const n = new Notification(opts.title || "Update", {
    body: opts.body || "",
    icon: opts.icon || "/icons/notify.png",
    tag: opts.tag || "app-event",
    requireInteraction: !!opts.sticky
  });
  if (opts.onclick) n.addEventListener("click", opts.onclick);
}
```

---
## Appendix F — `/css/pages/dashboard.css`

```css
/* Dashboard-specific styles */
.dashboard-header{
  margin-bottom:var(--s-6);
  display:flex; align-items:center; justify-content:space-between;
}
.dashboard-header h1{ font-size:28px; color:var(--pri); }

.stats-grid{
  display:grid;
  gap:var(--s-5);
  grid-template-columns:repeat(auto-fit, minmax(160px, 1fr));
}
.stat-card{
  background:var(--bg-1);
  border:1px solid rgba(185,236,255,.12);
  border-radius:var(--r-2);
  padding:var(--s-5);
  text-align:center;
}
.stat-card h2{ margin:0; font-size:20px; color:var(--ice); }
.stat-card p{ margin:var(--s-2) 0 0; color:var(--tx-1); font-size:14px; }

.activity-feed{
  margin-top:var(--s-7);
}
.activity-item{
  border-bottom:1px solid rgba(185,236,255,.1);
  padding:var(--s-4) 0;
  display:flex; align-items:center; gap:var(--s-4);
}
.activity-item:last-child{ border-bottom:none; }
.activity-icon{ width:32px; height:32px; border-radius:50%; background:var(--pri); display:flex; align-items:center; justify-content:center; color:#111; font-size:16px; }
.activity-text{ flex:1; font-size:14px; color:var(--tx-0); }
.activity-time{ font-size:12px; color:var(--tx-1); }
```
@@ -1,237 +0,0 @@
# marketplace-web

Web app for listing compute **offers**, placing **bids**, and viewing **market stats** in the AITBC stack.
Stage-aware: works **now** against a mock API, and later switches to the **coordinator/pool-hub/blockchain** endpoints without touching the UI.

## Goals

1. Browse offers (GPU/CPU, price per token, location, queue, latency).
2. Place and manage bids (instant or scheduled).
3. Watch market stats (price trends, filled volume, miner capacity).
4. Wallet view (balance, recent tx; read-only first).
5. Internationalization (EU languages later), dark theme, 960px layout.
## Tech/Structure (Windsurf-friendly)

- Vanilla **TypeScript + Vite** (no React), separate JS/CSS files.
- File layout (desktop 960px grid, mobile-first CSS, dark theme):

```
marketplace-web/
├─ public/
│  ├─ icons/              # favicons, app icons
│  └─ i18n/               # JSON dictionaries (en/… later)
├─ src/
│  ├─ app.ts              # app bootstrap/router
│  ├─ router.ts           # hash-router (/, /offer/:id, /bids, /stats, /wallet)
│  ├─ api/
│  │  ├─ http.ts          # fetch wrapper + baseURL swap (mock → real)
│  │  └─ marketplace.ts   # typed API calls
│  ├─ store/
│  │  ├─ state.ts         # global app state (signals or tiny pubsub)
│  │  └─ types.ts         # shared types/interfaces
│  ├─ views/
│  │  ├─ HomeView.ts
│  │  ├─ OfferDetailView.ts
│  │  ├─ BidsView.ts
│  │  ├─ StatsView.ts
│  │  └─ WalletView.ts
│  ├─ components/
│  │  ├─ OfferCard.ts
│  │  ├─ BidForm.ts
│  │  ├─ Table.ts
│  │  ├─ Sparkline.ts     # minimal chart (no external lib)
│  │  └─ Toast.ts
│  ├─ styles/
│  │  ├─ base.css         # reset, variables, dark theme
│  │  ├─ layout.css       # 960px grid, sections, header/footer
│  │  └─ components.css
│  └─ util/
│     ├─ format.ts        # fmt token, price, time
│     ├─ validate.ts      # input validation
│     └─ i18n.ts          # simple t() loader
├─ index.html
├─ vite.config.ts
└─ README.md
```
## Routes (Hash router)

- `/` — Offer list + filters
- `/offer/:id` — Offer details + **BidForm**
- `/bids` — User bids (open, filled, cancelled)
- `/stats` — Price/volume/capacity charts
- `/wallet` — Balance + last 10 tx (read-only)
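The route table above can be driven by a small pattern matcher; a minimal sketch in plain JS (the real `router.ts` would be TypeScript), where `matchRoute` and the `:param` syntax are assumptions:

```javascript
// Sketch of the matcher behind router.ts: compare a route pattern against
// the current location.hash segment by segment, capturing ':param' values.
export function matchRoute(pattern, hashPath) {
  const pat = pattern.split('/').filter(Boolean);
  const seg = hashPath.replace(/^#/, '').split('/').filter(Boolean);
  if (pat.length !== seg.length) return null;
  const params = {};
  for (let i = 0; i < pat.length; i++) {
    if (pat[i].startsWith(':')) params[pat[i].slice(1)] = decodeURIComponent(seg[i]);
    else if (pat[i] !== seg[i]) return null;               // literal mismatch
  }
  return params;                                           // {} for exact matches
}
```

The router would try each registered pattern in order on `hashchange` and render the first view whose match is non-null.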
## UI/UX Spec

- **Dark theme**, accent = ice-blue/white outlines (fits the OIB style).
- **960px max width** on desktop, mobile-first, Nothing Phone 2a as the reference device.
- **Toast** bottom-center for action feedback.
- Forms: no animations, clear validation, disable buttons during submit.
## Data Types (minimal)

```ts
type TokenAmount = `${number}`; // keep as string to avoid FP errors
type PricePerToken = `${number}`;

interface Offer {
  id: string;
  provider: string;        // miner or pool label
  hw: { gpu: string; vramGB?: number; cpu?: string };
  region: string;          // e.g., eu-central
  queue: number;           // jobs waiting
  latencyMs: number;
  price: PricePerToken;    // AIToken per 1k tokens processed
  minTokens: number;
  maxTokens: number;
  updatedAt: string;
}

interface BidInput {
  offerId: string;
  tokens: number;          // requested tokens to process
  maxPrice: PricePerToken; // cap
}

interface Bid extends BidInput {
  id: string;
  status: "open" | "filled" | "cancelled" | "expired";
  createdAt: string;
  filledTokens?: number;
  avgFillPrice?: PricePerToken;
}

interface MarketStats {
  ts: string[];
  medianPrice: number[];   // per interval
  filledVolume: number[];  // tokens
  capacity: number[];      // available tokens
}

interface Wallet {
  address: string;
  balance: TokenAmount;
  recent: Array<{ id: string; kind: "mint"|"spend"|"refund"; amount: TokenAmount; at: string }>;
}
```
## Mock API (Stage 0)

Base URL: `/.mock` (served via Vite dev middleware or static JSON)

- `GET /.mock/offers.json` → `Offer[]`
- `GET /.mock/offers/:id.json` → `Offer`
- `POST /.mock/bids` (body `BidInput`) → `Bid`
- `GET /.mock/bids.json` → `Bid[]`
- `GET /.mock/stats.json` → `MarketStats`
- `GET /.mock/wallet.json` → `Wallet`

Switch to the real endpoints by changing **`BASE_URL`** in `api/http.ts`.
## Real API (Stage 2/3/4 wiring)

When the coordinator/pool-hub/blockchain are ready:

- `GET /api/market/offers` → `Offer[]`
- `GET /api/market/offers/:id` → `Offer`
- `POST /api/market/bids` → create bid, returns `Bid`
- `GET /api/market/bids?owner=<wallet>` → `Bid[]`
- `GET /api/market/stats?range=24h|7d|30d` → `MarketStats`
- `GET /api/wallet/summary?addr=<wallet>` → `Wallet`

Auth header (later): `Authorization: Bearer <session-or-wallet-token>`.
## State & Caching

- In-memory store (`store/state.ts`) with a tiny pub/sub.
- Offer list cached for 30s; stats for 60s; bust on route change if stale.
- Optimistic UI for **Bid** create; reconcile on server response.
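The 30s/60s caching above reduces to a small TTL map; a minimal sketch in plain JS, where `now` is injectable for testing and defaults to `Date.now`, and all names are illustrative:

```javascript
// Sketch of a TTL cache for offer-list (30s) and stats (60s) responses.
export function ttlCache(ttlMs, now = Date.now) {
  const entries = new Map();                 // key -> { value, at }
  return {
    get(key) {
      const e = entries.get(key);
      if (!e || now() - e.at > ttlMs) {      // missing or stale
        entries.delete(key);
        return undefined;
      }
      return e.value;
    },
    set(key, value) { entries.set(key, { value, at: now() }); },
  };
}
```

The views would consult the cache first and only hit `api/marketplace.ts` on a miss, which also covers the "bust on route change if stale" rule for free.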
## Filters (Home)

- Region (multi-select)
- HW capability (GPU model, min VRAM, CPU present)
- Price range (slider)
- Max latency (ms)
- Max queue length

All filters run client-side over the fetched offers (server-side later).
## Validation Rules

- `BidInput.tokens` ∈ `[offer.minTokens, offer.maxTokens]`.
- `maxPrice >= offer.price` hints at an instant fill; otherwise the bid is placed as a limit order.
- Warn if `queue > threshold` or `latencyMs > threshold`.
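The three rules above can be sketched as one pure function for `util/validate.ts` (plain JS here); the threshold defaults and the `{ ok, errors, warnings }` shape are assumptions:

```javascript
// Sketch of the bid validation rules: hard errors for range violations,
// soft warnings for limit-order pricing and congested offers.
export function validateBid(bid, offer, limits = { queueMax: 10, latencyMaxMs: 500 }) {
  const errors = [];
  const warnings = [];
  if (bid.tokens < offer.minTokens || bid.tokens > offer.maxTokens)
    errors.push(`tokens must be in [${offer.minTokens}, ${offer.maxTokens}]`);
  if (Number(bid.maxPrice) < Number(offer.price))
    warnings.push('maxPrice below ask: placed as a limit bid, not an instant fill');
  if (offer.queue > limits.queueMax) warnings.push('long queue on this offer');
  if (offer.latencyMs > limits.latencyMaxMs) warnings.push('high latency on this offer');
  return { ok: errors.length === 0, errors, warnings };
}
```

`BidForm.ts` can block submit on `errors` and surface `warnings` as toasts without blocking.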
## Security Notes (Web)

- Sanitize all input; never `eval`.
- CSRF protection is not needed for read-only views; once auth exists, use a standard token for POSTs.
- Rate-limit POSTs (server-side).
- Keep the wallet **read-only** unless signing is integrated (later, via wallet-daemon).
## i18n

- `public/i18n/en.json` as the root dictionary.
- `util/i18n.ts` provides `t(key, params?)`.
- Keys only (no concatenated sentences). EU languages can be added later via your i18n tool.
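A minimal sketch of the `t()` loader (plain JS here; the real module is `util/i18n.ts`). The `{param}` placeholder syntax and the key-as-fallback behavior are assumptions:

```javascript
// Sketch of util/i18n.ts: dictionary lookup plus {param} interpolation.
let dict = {};

export function loadDict(d) { dict = d; }                  // e.g., parsed en.json

export function t(key, params = {}) {
  const template = dict[key] ?? key;                       // fall back to the key
  return template.replace(/\{(\w+)\}/g, (_, name) =>
    name in params ? String(params[name]) : `{${name}}`);  // keep unfilled slots
}
```

Because translations are whole templates with named slots, word order can differ per language without any sentence concatenation, which is exactly what the "keys only" rule protects.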
## Accessibility

- Semantic HTML, label every input, visible focus states.
- Keyboard: logical Tab order, Enter submits forms, Esc closes dialogs.
- Color contrast AA in the dark theme.

## Minimal Styling Rules

- `styles/base.css`: CSS variables for colors, spacing, radius.
- `styles/layout.css`: header, main container (max-width: 960px), grid for cards.
- `styles/components.css`: OfferCard, Table, Buttons, Toast.
## Testing

- Manual first:
  - Offers list loads; filters apply.
  - Place bids with edge values.
  - Stats sparkline renders with missing points.
- Later: Vitest for the `util/` and `api/` modules.

## Env/Config

- `VITE_API_BASE` → mock or real backend.
- `VITE_DEFAULT_REGION` → optional default filter.
- `VITE_FEATURE_WALLET=readonly|disabled`.
## Build/Run

```
# dev
npm i
npm run dev

# build
npm run build
npm run preview
```
## Migration Checklist (Mock → Real)

1. Replace `VITE_API_BASE` with the coordinator gateway URL.
2. Enable auth-header injection when a session is present.
3. Wire `/wallet` to the wallet-daemon read endpoint.
4. Swap the stats source to real telemetry.
5. Keep the same types; the server must honor them.

## Open Tasks

- [ ] Create file skeletons per the structure above.
- [ ] Add mock JSON under `public/.mock/`.
- [ ] Implement OfferCard + filters.
- [ ] Implement BidForm with validation + optimistic UI.
- [ ] Implement StatsView with `Sparkline` (no external chart lib).
- [ ] Wire the `VITE_API_BASE` switch.
- [ ] Basic a11y pass + dark theme polish.
- [ ] Wallet view (read-only).
@@ -1,423 +0,0 @@
# AITBC Miner – Windsurf Boot & Ops Guide

A minimal, production‑lean starter for bringing an **AITBC compute‑miner** online on Debian (Bookworm/Trixie). It is optimized for NVIDIA GPUs with CUDA support, yet safe to run CPU‑only. The miner polls jobs from a central **Coordinator API** on behalf of clients, executes AI workloads, generates proofs, and earns tokens. Payments are credited to the configured wallet.

---
## Flow Diagram
|
||||
```
|
||||
[ Client ] → submit job → [ Coordinator API ] → dispatch → [ Miner ] → proof → [ Coordinator API ] → credit → [ Wallet ]
|
||||
```
|
||||
- **Client**: User or application requesting AI computation.
|
||||
- **Coordinator API**: Central dispatcher that manages jobs and miners.
|
||||
- **Miner**: Executes the AI workload, generates proofs, and submits results.
|
||||
- **Wallet**: Receives token rewards for completed jobs.
|
||||
|
||||
---
|
||||
|
||||
## Quickstart: Windsurf Fast Boot
|
||||
The minimal info Windsurf needs to spin everything up quickly:
|
||||
|
||||
1. **Minimal Config Values** (edit `/etc/aitbc/miner.conf`):
|
||||
- `COORD_URL` (use mock for local dev): `http://127.0.0.1:8080`
|
||||
- `WALLET_ADDR` (any test string for mock): `wallet_demo`
|
||||
- `API_KEY` (mock ignores, still set one): `CHANGE_ME`
|
||||
- `MINER_ID`: `$(hostname)-gpu0`
|
||||
2. **Dependencies**
|
||||
```bash
|
||||
apt update
|
||||
apt install -y python3 python3-venv python3-pip curl jq ca-certificates git pciutils lsb-release
|
||||
# mock coordinator deps
|
||||
pip install fastapi uvicorn
|
||||
```
|
||||
- **GPU optional**: ensure `nvidia-smi` works for CUDA path.
|
||||
3. **Boot the Mock Coordinator** (new terminal):
|
||||
```bash
|
||||
uvicorn mock_coordinator:app --reload --host 127.0.0.2 --port 8080
|
||||
```
|
||||
4. **Install & Start Miner**
|
||||
```bash
|
||||
/root/scripts/aitbc-miner/install_miner.sh
|
||||
systemctl start aitbc-miner.service
|
||||
```
|
||||
5. **Verify**
|
||||
```bash
|
||||
systemctl status aitbc-miner.service
|
||||
tail -f /var/log/aitbc-miner.log
|
||||
curl -s http://127.0.0.1:8080/v1/wallet/balance | jq
|
||||
```
|
||||
|
||||
> With these details, Windsurf can boot both the miner and the mock Coordinator in under a minute without a production backend.
|
||||
|
||||
---
|
||||
|
||||
## CUDA Support
|
||||
Yes, the miner supports CUDA GPUs. The installer checks for `nvidia-smi` and, if present, attempts to install PyTorch with CUDA wheels (`cu124`). At runtime, tensors are placed on `'cuda'` if `torch.cuda.is_available()` is true. If no GPU is detected, the miner automatically falls back to CPU mode.
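The device-selection logic described above can be sketched as a small helper; this is an illustrative snippet (the function name `pick_device` is not from the miner code) that also degrades gracefully when torch is not installed at all:

```python
# Sketch of the runtime device selection described above. torch is optional:
# if it is missing or built without CUDA support, we fall back to "cpu".
def pick_device() -> str:
    try:
        import torch  # may be absent on CPU-only installs
        return "cuda" if torch.cuda.is_available() else "cpu"
    except ImportError:
        return "cpu"

print(pick_device())
```

Workload code can then call `tensor.to(pick_device())` without caring which path the installer took.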

**Prerequisites for CUDA:**

- Install NVIDIA drivers on Debian:
  ```bash
  apt install -y nvidia-driver nvidia-smi
  ```
- Optional: Install the CUDA toolkit if required for advanced workloads:
  ```bash
  apt install -y nvidia-cuda-toolkit
  ```
- Verify with:
  ```bash
  nvidia-smi
  ```

Make sure drivers/toolkit are installed before running the miner installer.

---

## 1) Targets & Assumptions

- Host: Debian 12/13, root shell, `zsh` available.
- Optional GPU: NVIDIA (e.g. RTX 4060 Ti) with CUDA toolchain.
- Network egress to Coordinator API (HTTPS). No inbound ports required.
- File paths align with user conventions.

---

## 2) Directory Layout

```
/root/scripts/aitbc-miner/      # scripts
  install_miner.sh
  miner.sh
/etc/aitbc/
  miner.conf                    # runtime config (env‑like)
/var/log/aitbc-miner.log        # log file (rotated by logrotate, optional)
```

---

## 3) Config File: `/etc/aitbc/miner.conf`

Environment‑style key/values. Edit with your Coordinator endpoint and wallet/API key.

```ini
# AITBC Miner config
COORD_URL="https://coordinator.example.net"     # Coordinator base URL
WALLET_ADDR="wallet1qxy2kgdygjrsqtzq2n0yrf..."  # Your payout address
API_KEY="REPLACE_WITH_WALLET_API_KEY"           # Wallet‑issued key for auth
MINER_ID="$(hostname)-gpu0"                     # Any stable node label
WORK_DIR="/tmp/aitbc-work"                      # Scratch space
HEARTBEAT_SECS=20                               # Health ping interval
JOB_POLL_SECS=3                                 # Fetch cadence
MAX_CONCURRENCY=1                               # Inference job slots
# GPU modes: auto|gpu|cpu
ACCEL_MODE="auto"
```

> **Tip:** Store secrets with `chmod 600 /etc/aitbc/miner.conf`.
> **Note:** systemd `EnvironmentFile` does not perform shell expansion, so `$(hostname)` in `MINER_ID` will be passed literally; set it to a concrete value (e.g. `myhost-gpu0`) when the file is consumed by the service unit.

---

## 4) Installer Script: `/root/scripts/aitbc-miner/install_miner.sh`

```bash
#!/bin/bash
# Script Version: 01
# Description: Install AITBC miner runtime (deps, folders, service)

set -euo pipefail

LOG_FILE=/var/log/aitbc-miner.log

mkdir -p /root/scripts/aitbc-miner /etc/aitbc
: > "$LOG_FILE"
chmod 600 "$LOG_FILE"

# Base deps
apt update
apt install -y curl jq ca-certificates coreutils procps python3 python3-venv python3-pip git \
  pciutils lsb-release

# Optional: NVIDIA CLI utils detection (no failure if absent)
if command -v nvidia-smi >/dev/null 2>&1; then
  echo "[INFO] NVIDIA detected" | tee -a "$LOG_FILE"
else
  echo "[INFO] NVIDIA not detected, will run CPU mode if configured" | tee -a "$LOG_FILE"
fi

# Python env for exemplar workloads (torch optional)
VENV_DIR=/opt/aitbc-miner/.venv
mkdir -p /opt/aitbc-miner
python3 -m venv "$VENV_DIR"
source "$VENV_DIR/bin/activate"

# Minimal runtime deps
pip install --upgrade pip wheel
# Try torch (GPU wheels if CUDA present, else CPU). Best‑effort only.
python - <<'PY'
import os, subprocess
try:
    cuda_ok = subprocess.call(["bash", "-lc", "nvidia-smi >/dev/null 2>&1"]) == 0
    pkg = "--index-url https://download.pytorch.org/whl/cu124 torch torchvision torchaudio" if cuda_ok else "torch torchvision torchaudio"
    os.system(f"pip install -q {pkg}")
    print("[INFO] torch installed")
except Exception as e:
    print("[WARN] torch install skipped:", e)
PY

# Place default config if missing
CONF=/etc/aitbc/miner.conf
if [ ! -f "$CONF" ]; then
  cat >"$CONF" <<'CFG'
COORD_URL="https://coordinator.example.net"
WALLET_ADDR="wallet_demo"
API_KEY="CHANGE_ME"
MINER_ID="demo-node"
WORK_DIR="/tmp/aitbc-work"
HEARTBEAT_SECS=20
JOB_POLL_SECS=3
MAX_CONCURRENCY=1
ACCEL_MODE="auto"
CFG
  chmod 600 "$CONF"
  echo "[INFO] Wrote $CONF" | tee -a "$LOG_FILE"
fi

# Install service unit
cat >/etc/systemd/system/aitbc-miner.service <<'UNIT'
[Unit]
Description=AITBC Compute Miner
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
ExecStart=/root/scripts/aitbc-miner/miner.sh
Restart=always
RestartSec=3
EnvironmentFile=/etc/aitbc/miner.conf
StandardOutput=append:/var/log/aitbc-miner.log
StandardError=append:/var/log/aitbc-miner.log

[Install]
WantedBy=multi-user.target
UNIT

systemctl daemon-reload
systemctl enable --now aitbc-miner.service

echo "[INFO] AITBC miner installed and started" | tee -a "$LOG_FILE"
```

---

## 5) Miner Runtime: `/root/scripts/aitbc-miner/miner.sh`

```bash
#!/bin/bash
# Script Version: 01
# Description: AITBC miner main loop (poll, run, prove, earn)

set -euo pipefail

LOG_FILE=/var/log/aitbc-miner.log
: > "$LOG_FILE"

# ========
# Helpers
# ========
log(){ printf '%s %s\n' "$(date -Is)" "$*" | tee -a "$LOG_FILE"; }

req(){
  local method="$1" path="$2" data="${3:-}"
  local url="${COORD_URL%/}${path}"
  if [ -n "$data" ]; then
    curl -fsS -X "$method" "$url" \
      -H "Authorization: Bearer $API_KEY" -H 'Content-Type: application/json' \
      --data "$data"
  else
    curl -fsS -X "$method" "$url" -H "Authorization: Bearer $API_KEY"
  fi
}

has_gpu(){ command -v nvidia-smi >/dev/null 2>&1 && nvidia-smi >/dev/null 2>&1; }

accel_mode(){
  case "$ACCEL_MODE" in
    gpu)  has_gpu && echo gpu || echo cpu ;;
    cpu)  echo cpu ;;
    auto) has_gpu && echo gpu || echo cpu ;;
    *)    echo cpu ;;
  esac
}

run_job(){
  local job_json="$1"
  local job_id; job_id=$(echo "$job_json" | jq -r '.id')
  local workload; workload=$(echo "$job_json" | jq -r '.workload')
  local mode; mode=$(accel_mode)

  log "[JOB] start id=$job_id mode=$mode type=$workload"
  mkdir -p "$WORK_DIR/$job_id"

  # Example workload types: "torch_bench", "sd_infer", "llm_gen"...
  case "$workload" in
    torch_bench)
      # Note: single quotes inside the Python snippet keep the outer
      # double-quoted bash string intact.
      /bin/bash -lc "source /opt/aitbc-miner/.venv/bin/activate && python - <<'PY'
import time, torch
start=time.time()
x=torch.randn(1024,1024,device='cuda' if torch.cuda.is_available() else 'cpu')
for _ in range(100): x=x@x
elapsed=time.time()-start
print(f'throughput_ops=100, seconds={elapsed:.4f}')
PY" | tee -a "$LOG_FILE" >"$WORK_DIR/$job_id/out.txt"
      ;;
    sd_infer)
      echo "stub: run stable diffusion pipeline here" | tee -a "$LOG_FILE" >"$WORK_DIR/$job_id/out.txt"
      ;;
    llm_gen)
      echo "stub: run text generation here" | tee -a "$LOG_FILE" >"$WORK_DIR/$job_id/out.txt"
      ;;
    *)
      echo "unknown workload" >"$WORK_DIR/$job_id/out.txt" ;;
  esac

  # Build a minimal proof (hash of outputs + metrics placeholder)
  local proof; proof=$(jq -n --arg id "$job_id" \
    --arg mode "$mode" \
    --arg out_sha "$(sha256sum "$WORK_DIR/$job_id/out.txt" | awk '{print $1}')" \
    '{id:$id, mode:$mode, output_sha:$out_sha, metrics:{}}')

  req POST "/v1/miner/proof" "$proof" >/dev/null
  log "[JOB] done id=$job_id proof_submitted"
}

heartbeat(){
  local mode; mode=$(accel_mode)
  local gpu; gpu=$(has_gpu && echo 1 || echo 0)
  req POST "/v1/miner/heartbeat" "$(jq -n \
    --arg id "$MINER_ID" --arg w "$WALLET_ADDR" --arg mode "$mode" \
    --argjson gpu "$gpu" '{miner_id:$id,wallet:$w,mode:$mode,gpu:$gpu}')" >/dev/null
}

# ========
# Main Process
# ========
log "[BOOT] AITBC miner starting (id=$MINER_ID)"
mkdir -p "$WORK_DIR"

# Prime heartbeat
heartbeat || log "[WARN] initial heartbeat failed"

# Poll/execute loop
while true; do
  sleep "$JOB_POLL_SECS"
  # Opportunistic heartbeat
  heartbeat || true

  # Fetch one job
  if J=$(req GET "/v1/miner/next?miner=$MINER_ID&slots=$MAX_CONCURRENCY" 2>/dev/null); then
    echo "$J" | jq -e '.id' >/dev/null 2>&1 || continue
    run_job "$J" || log "[WARN] job failed"
  fi
done
```

Make executable:
```bash
chmod +x /root/scripts/aitbc-miner/install_miner.sh /root/scripts/aitbc-miner/miner.sh
```

---

## 6) Bootstrap

1. Create folders + drop the three files above.
2. Edit `/etc/aitbc/miner.conf` with real values.
3. Run installer:
   ```bash
   /root/scripts/aitbc-miner/install_miner.sh
   ```
4. Check status & logs:
   ```bash
   systemctl status aitbc-miner.service
   tail -f /var/log/aitbc-miner.log
   ```

---

## 7) Health & Debug

- Quick GPU sanity: `nvidia-smi` (optional).
- Liveness: periodic `/v1/miner/heartbeat` pings.
- Job smoke test (coordinator): ensure `/v1/miner/next` returns a JSON job.

---

## 8) Logrotate (optional)

Create `/etc/logrotate.d/aitbc-miner`:
```conf
/var/log/aitbc-miner.log {
  rotate 7
  daily
  missingok
  notifempty
  compress
  copytruncate
}
```

---

## 9) Security Notes

- Keep `API_KEY` scoped to miner ops with revocation.
- No inbound ports required; allow egress HTTPS only.
- Consider systemd hardening (`ProtectSystem=full`, `ProtectHome=yes`, `NoNewPrivileges=yes`) once stable.

---

## 10) Extending Workloads

- Implement real `sd_infer` (Stable Diffusion) and `llm_gen` via the venv.
- Add job-level resource caps (VRAM/CPU-time) from Coordinator parameters.
- Attach accounting metrics for reward weighting (e.g., `tokens_per_kJ` or `tokens_per_TFLOP_s`).

---

## 11) Common Commands

```bash
# restart after config edits
systemctl restart aitbc-miner.service

# follow logs
journalctl -u aitbc-miner -f

# disable autostart
systemctl disable --now aitbc-miner.service
```

---

## 12) Coordinator API v1 (Detailed)

This section specifies the **Miner-facing** HTTP API. All endpoints are versioned under `/v1/` and use **JSON** over **HTTPS**. Authentication is via `Authorization: Bearer <API_KEY>` issued by the wallet/coordinator.

### 12.1 Global

- **Base URL**: `${COORD_URL}` (e.g., `https://coordinator.example.net`).
- **Content-Type**: `application/json; charset=utf-8`.
- **Auth**: `Authorization: Bearer <API_KEY>` (scoped for miner ops).
- **Idempotency** *(recommended)*: `Idempotency-Key: <uuid4>` for POSTs.
- **Clock**: All timestamps are ISO‑8601 UTC (e.g., `2025-09-26T13:37:00Z`).
- **Errors**: Non‑2xx responses return a body:
  ```json
  { "error": { "code": "STRING_CODE", "message": "human readable", "details": {"field": "optional context"} } }
  ```
- **Common HTTP codes**: `200 OK`, `201 Created`, `204 No Content`, `400 Bad Request`, `401 Unauthorized`, `403 Forbidden`, `404 Not Found`, `409 Conflict`, `422 Unprocessable Entity`, `429 Too Many Requests`, `500/502/503`.
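A client following these conventions needs only two small helpers: one to build the auth/idempotency headers and one to decode the error envelope. This is a hedged sketch, not the actual miner client; the function names are illustrative:

```python
import json
import uuid

def build_headers(api_key: str) -> dict:
    # Auth + idempotency headers as specified in 12.1.
    return {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json; charset=utf-8",
        "Idempotency-Key": str(uuid.uuid4()),  # fresh uuid4 per POST
    }

def parse_error(status: int, body: str):
    # Non-2xx responses carry the {"error": {...}} envelope.
    if 200 <= status < 300:
        return None
    err = json.loads(body).get("error", {})
    return err.get("code", "UNKNOWN"), err.get("message", "")

print(parse_error(422, '{"error": {"code": "E_VALIDATION", "message": "bad field"}}'))
```

Retrying a POST with the same `Idempotency-Key` lets the coordinator deduplicate, which is why the key is generated once per logical request, not per attempt.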

---

### 12.2 Types

#### MinerCapabilities
```json
{
  "miner_id": "string",
  "mode": "gpu|cpu",
  "gpu": true,
  "concurrency": 1,
  "workloads": ["torch_bench", "sd_infer", "llm_gen"],
  "limits": {"vram_mb": 16000, "ram_mb": 32768, "max_runtime_s
@@ -1,412 +0,0 @@

# miner-node/ — Worker Node Daemon for GPU/CPU Tasks

> **Goal:** Implement a Docker‑free worker daemon that connects to the Coordinator API, advertises capabilities (CPU/GPU), fetches jobs, executes them in a sandboxed workspace, and streams results/metrics back.

---

## 1) Scope & MVP

**MVP Features**
- Node registration with Coordinator (auth token + capability descriptor).
- Heartbeat & liveness (interval ± jitter, backoff on failure).
- Job fetch → ack → execute → upload result → finalize.
- Two runner types:
  - **CLI runner**: executes a provided command with arguments (whitelist‑based).
  - **Python runner**: executes a trusted task module with parameters.
- CPU/GPU capability detection (CUDA, VRAM, driver info) without Docker.
- Sandboxed working dir per job under `/var/lib/aitbc/miner/jobs/<job-id>`.
- Resource controls (nice/ionice/ulimit; optional cgroup v2 if present).
- Structured JSON logging and minimal metrics.

**Post‑MVP**
- Chunked artifact upload; resumable transfers.
- Prometheus `/metrics` endpoint (pull).
- GPU multi‑card scheduling & fractional allocation policy.
- On‑node model cache management (size, eviction, pinning).
- Signed task manifests & attestation of execution.
- Secure TMPFS for secrets; hardware key support (YubiKey).

---

## 2) High‑Level Architecture

```
client → coordinator-api → miner-node(s) → results store → coordinator-api → client
```

Miner components:
- **Agent** (control loop): registration, heartbeat, fetch/dispatch, result reporting.
- **Capability Probe**: CPU/GPU inventory (CUDA, VRAM), free RAM/disk, load.
- **Schedulers**: simple FIFO for MVP; one job per GPU or CPU slot.
- **Runners**: CLI runner & Python runner.
- **Sandbox**: working dirs, resource limits, network I/O gating (optional), file allowlist.
- **Telemetry**: JSON logs, minimal metrics; per‑job timeline.

---

## 3) Directory Layout (on node)

```
/var/lib/aitbc/miner/
├─ jobs/
│  ├─ <job-id>/
│  │  ├─ input/
│  │  ├─ work/
│  │  ├─ output/
│  │  └─ logs/
├─ cache/                     # model/assets cache (optional)
└─ tmp/
/etc/aitbc/miner/
├─ config.yaml
└─ allowlist.d/               # allowed CLI programs & argument schema snippets
/var/log/aitbc/miner/
/usr/local/lib/aitbc/miner/   # python package venv install target
```

---

## 4) Config (YAML)

```yaml
node_id: "node-<shortid>"
coordinator:
  base_url: "https://coordinator.local/api/v1"
  auth_token: "env:MINER_AUTH"   # read from env at runtime
  tls_verify: true
  timeout_s: 20

heartbeat:
  interval_s: 15
  jitter_pct: 10
  backoff:
    min_s: 5
    max_s: 120

runners:
  cli:
    enable: true
    allowlist_files:
      - "/etc/aitbc/miner/allowlist.d/ffmpeg.yaml"
      - "/etc/aitbc/miner/allowlist.d/whisper.yaml"
  python:
    enable: true
    task_paths:
      - "/usr/local/lib/aitbc/miner/tasks"
    venv: "/usr/local/lib/aitbc/miner/.venv"

resources:
  max_concurrent_cpu: 2
  max_concurrent_gpu: 1
  cpu_nice: 10
  io_class: "best-effort"
  io_level: 6
  mem_soft_mb: 16384

workspace:
  root: "/var/lib/aitbc/miner/jobs"
  keep_success: 24h
  keep_failed: 7d

logging:
  level: "info"
  json: true
  path: "/var/log/aitbc/miner/miner.jsonl"
```
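The `auth_token: "env:MINER_AUTH"` convention means the config file never contains the secret itself; the loader resolves the indirection at startup. A minimal sketch of that resolver (the helper name `resolve_secret` is illustrative, not from the codebase):

```python
import os

def resolve_secret(value: str) -> str:
    # Values prefixed with "env:" are read from the environment at runtime,
    # so the token itself never lives in config.yaml on disk.
    if value.startswith("env:"):
        var = value[len("env:"):]
        secret = os.environ.get(var)
        if secret is None:
            raise RuntimeError(f"missing environment variable {var}")
        return secret
    return value  # literal value, passed through unchanged

os.environ["MINER_AUTH"] = "demo-token"
print(resolve_secret("env:MINER_AUTH"))  # → demo-token
```

Raising on a missing variable (rather than silently using an empty token) makes a misconfigured service fail at boot, where systemd's `Restart=` policy surfaces it immediately.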

---

## 5) Environment & Dependencies

- **OS:** Debian 12/13 (systemd).
- **Python:** 3.11+ in venv under `/usr/local/lib/aitbc/miner/.venv`.
- **Libraries:** `httpx`, `pydantic`, `uvloop` (optional), `pyyaml`, `psutil`.
- **GPU (optional):** NVIDIA driver installed; `nvidia-smi` available; CUDA 12.x runtime on path for GPU tasks.

**Install skeleton**
```bash
python3 -m venv /usr/local/lib/aitbc/miner/.venv
/usr/local/lib/aitbc/miner/.venv/bin/pip install --upgrade pip
/usr/local/lib/aitbc/miner/.venv/bin/pip install httpx pydantic pyyaml psutil uvloop
install -d /etc/aitbc/miner /var/lib/aitbc/miner/{jobs,cache,tmp} /var/log/aitbc/miner
```

---

## 6) Systemd Service

**/etc/systemd/system/aitbc-miner.service**
```ini
[Unit]
Description=AITBC Miner Node
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
Environment=MINER_AUTH=***REDACTED***
ExecStart=/usr/local/lib/aitbc/miner/.venv/bin/python -m aitbc_miner --config /etc/aitbc/miner/config.yaml
User=games
Group=games
# Lower CPU/IO priority by default
Nice=10
IOSchedulingClass=best-effort
IOSchedulingPriority=6
Restart=always
RestartSec=5
# Hardening
NoNewPrivileges=true
ProtectSystem=full
ProtectHome=true
PrivateTmp=true
ReadWritePaths=/var/lib/aitbc/miner /var/log/aitbc/miner

[Install]
WantedBy=multi-user.target
```

---

## 7) Capability Probe (sent to Coordinator)

Example payload:
```json
{
  "node_id": "node-abc123",
  "version": "0.1.0",
  "cpu": {"cores": 16, "arch": "x86_64"},
  "memory_mb": 64000,
  "disk_free_mb": 250000,
  "gpu": [
    {
      "vendor": "nvidia", "name": "RTX 4060 Ti 16GB",
      "vram_mb": 16384,
      "cuda": {"version": "12.3", "driver": "545.23.06"}
    }
  ],
  "runners": ["cli", "python"],
  "tags": ["debian", "cuda", "cpu"]
}
```
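A probe producing this payload can be sketched with the standard library alone; the real daemon would use `psutil` for memory/load and richer `nvidia-smi` fields, but this stdlib-only version (with a placeholder `node_id`) already degrades gracefully on CPU-only hosts:

```python
import json
import os
import platform
import shutil
import subprocess

def probe() -> dict:
    # GPU inventory via nvidia-smi's CSV query mode; absence of the tool
    # simply yields an empty gpu list.
    gpus = []
    try:
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=name,memory.total", "--format=csv,noheader"],
            capture_output=True, text=True, timeout=5, check=True,
        ).stdout
        for line in out.strip().splitlines():
            name, mem = [f.strip() for f in line.split(",", 1)]
            gpus.append({"vendor": "nvidia", "name": name, "vram": mem})
    except (OSError, subprocess.SubprocessError):
        pass  # no NVIDIA tooling present
    return {
        "node_id": "node-local",  # placeholder; the real id comes from config.yaml
        "cpu": {"cores": os.cpu_count(), "arch": platform.machine()},
        "disk_free_mb": shutil.disk_usage("/").free // (1024 * 1024),
        "gpu": gpus,
        "runners": ["cli", "python"],
    }

print(json.dumps(probe(), indent=2))
```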

---

## 8) Coordinator API Contract (MVP)

**Endpoints (HTTPS, JSON):**
- `POST /nodes/register` → returns signed `node_token` (or 401)
- `POST /nodes/heartbeat` → `{node_id, load, free_mb, gpu_free}` → 200
- `POST /jobs/pull` → `{node_id, filters}` → `{job|none}`
- `POST /jobs/ack` → `{job_id, node_id}` → 200
- `POST /jobs/progress` → `{job_id, pct, note}` → 200
- `POST /jobs/result` → multipart (metadata.json + artifacts/*) → 200
- `POST /jobs/fail` → `{job_id, error_code, error_msg, logs_ref}` → 200

**Auth**
- Bearer token in header (Node → Coordinator): `Authorization: Bearer <node_token>`
- Coordinator signs `job.manifest` with HMAC(sha256) or Ed25519 (post‑MVP).

**Job manifest (subset)**
```json
{
  "job_id": "j-20250926-001",
  "runner": "cli",
  "requirements": {"gpu": true, "vram_mb": 12000, "cpu_threads": 4},
  "timeout_s": 3600,
  "input": {"urls": ["https://.../input1"], "inline": {"text": "..."}},
  "command": "ffmpeg",
  "args": ["-y", "-i", "input1.mp4", "-c:v", "libx264", "output.mp4"],
  "artifacts": [{"path": "output.mp4", "type": "video/mp4", "max_mb": 5000}]
}
```
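Validating an inbound manifest before dispatch is the first line of defense. The production daemon would use pydantic models (already a dependency per §5); this stdlib-only sketch with an assumed `validate_manifest` helper shows the shape of the checks:

```python
# Required top-level fields of the manifest subset and their expected types.
REQUIRED = {"job_id": str, "runner": str, "timeout_s": int, "command": str, "args": list}

def validate_manifest(m: dict) -> list[str]:
    # Returns a list of human-readable problems; empty list means valid.
    errors = []
    for key, typ in REQUIRED.items():
        if key not in m:
            errors.append(f"missing field: {key}")
        elif not isinstance(m[key], typ):
            errors.append(f"bad type for {key}: expected {typ.__name__}")
    if m.get("runner") not in ("cli", "python"):
        errors.append("runner must be 'cli' or 'python'")
    return errors

manifest = {"job_id": "j-20250926-001", "runner": "cli", "timeout_s": 3600,
            "command": "ffmpeg", "args": ["-y"]}
print(validate_manifest(manifest))  # → []
```

Collecting all problems at once (instead of raising on the first) gives the `POST /jobs/fail` report something actionable to send back.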

---

## 9) Runner Design

### CLI Runner
- Validate `command` against allowlist (`/etc/aitbc/miner/allowlist.d/*.yaml`).
- Validate `args` against per‑tool schema (regex & size caps).
- Materialize inputs in job workspace; set `PATH`, `CUDA_VISIBLE_DEVICES`.
- Launch via `subprocess.Popen` with `preexec_fn` applying `nice`, `ionice`, `setrlimit`.
- Live‑tail stdout/stderr to `logs/exec.log`; throttle progress pings.

### Python Runner
- Import trusted module `tasks.<name>:run(**params)` from configured paths.
- Run in the same venv; optional per‑task venv later.
- Enforce timeouts; capture logs; write artifacts to `output/`.

---

## 10) Resource Controls (No Docker)

- **CPU:** `nice(10)`; optional cgroups v2 `cpu.max` if available.
- **IO:** `ionice -c 2 -n 6` (best‑effort) for heavy disk ops.
- **Memory:** `setrlimit(RLIMIT_AS)` soft cap; kill on OOM.
- **GPU:** select by policy (least used VRAM). No hard memory partitioning in MVP.
- **Network:** allowlist outbound hosts; deny by default (optional phase‑2).
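The CPU and memory controls above map directly onto a `preexec_fn` that runs in the child between `fork` and `exec`. This Linux-oriented sketch (the factory name `make_preexec` is illustrative) covers `nice` and `RLIMIT_AS`; IO priority would be set via the `ionice` CLI or `ioprio_set`, which has no stdlib wrapper and is omitted here:

```python
import os
import resource
import subprocess

def make_preexec(nice_level: int = 10, mem_soft_mb: int = 16384):
    # Returns a preexec_fn for subprocess.Popen; it executes in the child
    # process just before exec, so limits apply only to the job.
    def _apply():
        os.nice(nice_level)                                   # lower CPU priority
        cap = mem_soft_mb * 1024 * 1024
        resource.setrlimit(resource.RLIMIT_AS, (cap, cap))    # address-space cap
        os.setpgrp()   # own process group, so an OOM kill can take the whole tree
    return _apply

proc = subprocess.Popen(["echo", "limited"], preexec_fn=make_preexec(5, 1024),
                        stdout=subprocess.PIPE, text=True)
print(proc.communicate()[0].strip())  # → limited
```

Because the child owns its process group, the supervisor can later `os.killpg` it on timeout or OOM without hunting down grandchildren.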

---

## 11) Job Lifecycle (State Machine)

`IDLE → PULLING → ACKED → PREP → RUNNING → UPLOADING → DONE | FAILED | RETRY_WAIT`

- Retries: exponential backoff, max N; idempotent uploads.
- On crash: on‑start recovery scans `jobs/*/state.json` and reconciles with the Coordinator.
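The `RETRY_WAIT` delay can reuse the `heartbeat.backoff` parameters from the config in §4 (`min_s: 5`, `max_s: 120`, `jitter_pct: 10`). A sketch of that computation, with an illustrative function name:

```python
import random

def backoff_delay(attempt: int, min_s: float = 5, max_s: float = 120,
                  jitter_pct: float = 10) -> float:
    # Exponential backoff: 5, 10, 20, 40, ... capped at max_s, with
    # +/- jitter_pct% randomization to avoid synchronized retries.
    base = min(max_s, min_s * (2 ** attempt))
    spread = base * jitter_pct / 100.0
    return base + random.uniform(-spread, spread)

print([round(backoff_delay(a, jitter_pct=0)) for a in range(6)])  # → [5, 10, 20, 40, 80, 120]
```

The jitter matters most after a Coordinator restart, when every node would otherwise retry on the same schedule.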

---

## 12) Logging & Metrics

- JSON lines in `/var/log/aitbc/miner/miner.jsonl` with fields: `ts, level, node_id, job_id, event, attrs{}`.
- Optional `/healthz` (HTTP) returning 200 + brief status.
- Future: Prometheus `/metrics` with gauges (queue, running, VRAM free, CPU load).

---

## 13) Security Model

- TLS required; pin CA or enable cert validation per env.
- Node bootstrap token (`MINER_AUTH`) exchanged for `node_token` at registration.
- Strict allowlist for CLI tools + args; size/time caps.
- Secrets never written to disk unencrypted; pass via env vars or in‑memory.
- Wipe workdirs on success (per policy); keep failed jobs for triage.

---

## 14) Windsurf Implementation Plan

**Milestone 1 — Skeleton**
1. `aitbc_miner/` package: `main.py`, `config.py`, `agent.py`, `probe.py`, `runners/{cli.py, python.py}`, `util/{limits.py, fs.py, log.py}`.
2. Load YAML config, bootstrap logs, print probe JSON.
3. Implement `/healthz` (optional FastAPI or bare aiohttp) for local checks.

**Milestone 2 — Control Loop**
1. Register → store `node_token` (in memory only).
2. Heartbeat task (async), backoff on network errors.
3. Pull/ack & single‑slot executor; write `state.json`.

**Milestone 3 — Runners**
1. CLI allowlist loader + validator; subprocess with limits.
2. Python runner calling `tasks.example:run`.
3. Upload artifacts via multipart; handle large files with a chunking stub.

**Milestone 4 — Hardening & Ops**
1. Crash recovery; cleanup policy; TTL sweeper.
2. Metrics counters; structured logging fields.
3. Systemd unit; install scripts; docs.

---

## 15) Minimal Allowlist Example (ffmpeg)

```yaml
# /etc/aitbc/miner/allowlist.d/ffmpeg.yaml
command:
  path: "/usr/bin/ffmpeg"
args:
  - ["-y"]
  - ["-i", ".+\\.(mp4|wav|mkv)$"]
  - ["-c:v", "(libx264|copy)"]
  - ["-c:a", "(aac|copy)"]
  - ["-b:v", "[1-9][0-9]{2,5}k"]
  - ["output\\.(mp4|mkv)"]
max_total_args_len: 4096
max_runtime_s: 7200
max_output_mb: 5000
```
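The validator behind this file can be sketched as sequential full-match of regex groups against argv. This simplified sketch treats every group as required and in order (the real schema presumably allows optional groups like `-c:a` and `-b:v`), using a hypothetical `validate_args` helper:

```python
import re

# Flattened subset of the ffmpeg allowlist above: each entry is a sequence
# of regexes that must fully match consecutive argv tokens.
ARG_SCHEMA = [
    ["-y"],
    ["-i", r".+\.(mp4|wav|mkv)$"],
    ["-c:v", r"(libx264|copy)"],
    [r"output\.(mp4|mkv)"],
]

def validate_args(args: list[str], schema=ARG_SCHEMA, max_len: int = 4096) -> bool:
    if sum(len(a) for a in args) > max_len:       # max_total_args_len cap
        return False
    i = 0
    for group in schema:
        chunk = args[i:i + len(group)]
        if len(chunk) < len(group):
            return False
        # re.fullmatch anchors both ends, so "output.mp4; rm -rf /" cannot pass.
        if not all(re.fullmatch(p, a) for p, a in zip(group, chunk)):
            return False
        i += len(group)
    return i == len(args)  # reject any trailing, unmatched arguments

print(validate_args(["-y", "-i", "input1.mp4", "-c:v", "libx264", "output.mp4"]))  # → True
```

The final `i == len(args)` check is the important one: extra arguments the schema never saw are a denial, not a pass-through.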

---

## 16) Mock Coordinator (for local testing)

> Run a tiny dev server to hand out a single job and accept results.

```python
# mock_coordinator.py (FastAPI)
from fastapi import FastAPI, UploadFile, File, Form
from pydantic import BaseModel

app = FastAPI()

JOB = {
    "job_id": "j-local-1",
    "runner": "cli",
    "requirements": {"gpu": False},
    "timeout_s": 120,
    "command": "echo",
    "args": ["hello", "world"],
    "artifacts": [{"path": "output.txt", "type": "text/plain", "max_mb": 1}]
}

class PullReq(BaseModel):
    node_id: str
    filters: dict | None = None

@app.post("/api/v1/jobs/pull")
def pull(req: PullReq):
    return {"job": JOB}

@app.post("/api/v1/jobs/ack")
def ack(job_id: str, node_id: str):
    return {"ok": True}

@app.post("/api/v1/jobs/result")
def result(job_id: str = Form(...), metadata: str = Form(...), artifact: UploadFile = File(...)):
    return {"ok": True}
```

---

## 17) Developer UX (Make Targets)

```
make venv   # create venv + install deps
make run    # run miner with local config
make fmt    # ruff/black (optional)
make test   # unit tests
```

---

## 18) Operational Runbook

- **Start/Stop**: `systemctl enable --now aitbc-miner`
- **Logs**: `journalctl -u aitbc-miner -f` and `/var/log/aitbc/miner/miner.jsonl`
- **Rotate**: logrotate config (size 50M, keep 7)
- **Upgrade**: drain → stop → replace venv → start → verify heartbeat
- **Health**: `/healthz` 200 + JSON `{running, queued, cpu_load, vram_free}`

---

## 19) Failure Modes & Recovery

- **Network errors**: exponential backoff; keep heartbeat local status.
- **Job invalid**: fail fast with reason; do not retry.
- **Runner denied**: allowlist miss → fail with `E_DENY`.
- **OOM**: kill process group; mark `E_OOM`.
- **GPU unavailable**: requeue with reason `E_NOGPU`.

---

## 20) Roadmap Notes

- Binary task bundles with signed SBOM.
- Remote cache warming via Coordinator hints.
- Multi‑queue scheduling (latency vs throughput).
- MIG/compute‑instance support if hardware allows.

---

## 21) Checklist for Windsurf

1. Create `aitbc_miner/` package skeleton with modules listed in §14.
2. Implement config loader + capability probe output.
3. Implement async agent loop: register → heartbeat → pull/ack.
4. Implement CLI runner with allowlist (§15) and exec log.
5. Implement Python runner stub (`tasks/example.py`).
6. Write result uploader (multipart) and finalize call.
7. Add systemd unit (§6) and basic install script.
8. Test end‑to‑end against `mock_coordinator.py` (§16).
9. Document log fields + troubleshooting card.
10. Add optional `/healthz` endpoint.
@@ -1,314 +0,0 @@

# pool-hub.md — Client ↔ Miners Pool & Matchmaking Gateway (guide for Windsurf)

> **Role in AITBC**
> The **Pool Hub** is the real-time directory of available miners and a low-latency **matchmaker** between **job requests** (coming from `coordinator-api`) and **worker capacity** (from `miner-node`).
> It tracks miner capabilities, health, and price; computes a score; and returns the **best N candidates** for each job.

---

## 1) MVP Scope (Stage 3 of the boot plan)

- Accept **miner registrations** + **heartbeats** (capabilities, price, queues).
- Maintain an in-memory + persistent **miner registry** with fast lookups.
- Provide a **/match** API for `coordinator-api` to request top candidates.
- Simple **scoring** with a pluggable strategy (latency, VRAM, price, trust).
- **No token minting/accounting** in MVP (stub hooks only).
- **No Docker**: Debian 12, `uvicorn` service, optional Nginx reverse proxy.

---

## 2) High-Level Architecture

```
client-web ──> coordinator-api ──> pool-hub ──> miner-node
                    ▲                  ▲
                    │                  │
              wallet-daemon    heartbeat + capability updates
                 (later)          (WebSocket/HTTP)
```

### Responsibilities
- **Registry**: who’s online, what they can do, how much it costs.
- **Health**: heartbeats, timeouts, grace periods, auto-degrade/remove.
- **Matchmaking**: filter → score → return top K miner candidates.
- **Observability**: metrics for availability, match latency, rejection reasons.

---

## 3) Protocols & Endpoints (FastAPI)

> Base URL examples:
> - Public (optional): `https://pool-hub.example/api`
> - Internal (preferred): `http://127.0.0.1:8203` (behind Nginx)

### 3.1 Miner lifecycle (HTTP + WebSocket)

- `POST /v1/miners/register`
  - **Body**: `{ miner_id, api_key, addr, proto, gpu_vram, gpu_name, cpu_cores, ram_gb, price_token_per_ksec, max_parallel, tags[], capabilities[] }`
  - **Returns**: `{ status, lease_ttl_sec, next_heartbeat_sec }`
  - Notes: also returns a short-lived **session_token** if the `api_key` is valid.

- `POST /v1/miners/update`
  - **Body**: `{ session_token, queue_len, busy, current_jobs[], price_token_per_ksec? }`

- `GET /v1/miners/lease/renew?session_token=...`
  - Renews the online lease (fallback if the WebSocket drops).

- `WS /v1/miners/heartbeat`
  - **Auth**: `session_token` in query.
  - **Payload (periodic)**: `{ ts, queue_len, avg_latency_ms?, temp_c?, mem_free_gb? }`
  - Server may push **commands** (e.g., “update tags”, “set price cap”).

- `POST /v1/miners/logout`
  - Clean unregister (otherwise lease expiry removes the miner).

### 3.2 Coordinator matchmaking (HTTP)

- `POST /v1/match`
  - **Body**:
    ```json
    {
      "job_id": "uuid",
      "requirements": {
        "task": "image_embedding",
        "min_vram_gb": 8,
        "min_ram_gb": 8,
        "accel": ["cuda"],
        "tags_any": ["eu-west","low-latency"],
        "capabilities_any": ["sentence-transformers/all-MiniLM-L6-v2"]
      },
      "hints": { "region": "eu", "max_price": 0.8, "deadline_ms": 5000 },
      "top_k": 3
    }
    ```
  - **Returns**:
    ```json
    {
      "job_id": "uuid",
      "candidates": [
        { "miner_id":"...", "addr":"...", "proto":"grpc", "score":0.87, "eta_ms": 320, "price":0.75 },
        { "miner_id":"...", "addr":"...", "proto":"grpc", "score":0.81, "eta_ms": 410, "price":0.65 }
      ],
      "explain": "cap=1.0 • price=0.8 • latency=0.9 • trust=0.7"
    }
    ```
|
||||
|
||||
- `POST /v1/feedback`
|
||||
- **Body**: `{ job_id, miner_id, outcome: "accepted"|"rejected"|"failed"|"completed", latency_ms?, fail_code?, tokens_spent? }`
|
||||
- Used to adjust **trust score** & calibration.
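
One simple way to fold `/v1/feedback` outcomes into a miner's trust score is an exponential moving average toward 1.0 on success and 0.0 on failure. This is a sketch of one plausible update rule (the spec does not fix the formula; `alpha` is an assumed smoothing factor):

```python
def update_trust(trust: float, outcome: str, alpha: float = 0.1) -> float:
    """Nudge trust toward 1.0 on success, toward 0.0 on rejection/failure."""
    target = 1.0 if outcome in ("accepted", "completed") else 0.0
    return (1 - alpha) * trust + alpha * target
```

With `alpha=0.1` a single failure moves a 0.5-trust miner to 0.45, so one transient error never sinks a miner; sustained failures do.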

### 3.3 Observability

- `GET /v1/health` → `{ status:"ok", online_miners, avg_latency_ms }`
- `GET /v1/metrics` → Prometheus-style metrics (text)
- `GET /v1/miners` (admin-guarded) → paginated registry snapshot

---

## 4) Data Model (PostgreSQL minimal)

> Schema name: `poolhub`

**tables**
- `miners`
  `miner_id PK, api_key_hash, created_at, last_seen_at, addr, proto, gpu_vram_gb, gpu_name, cpu_cores, ram_gb, max_parallel, base_price, tags jsonb, capabilities jsonb, trust_score float default 0.5, region text`
- `miner_status`
  `miner_id FK, queue_len int, busy bool, avg_latency_ms int, temp_c int, mem_free_gb float, updated_at`
- `feedback`
  `id PK, job_id, miner_id, outcome, latency_ms, fail_code, tokens_spent, created_at`
- `price_overrides` (optional)

**indexes**
- `idx_miners_region_caps_gte_vram` (GIN on jsonb + partials)
- `idx_status_updated_at`
- `idx_feedback_miner_time`

**in-memory caches**
- Hot registry (Redis or local LRU) for sub-millisecond match filtering.
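
The `miners` table and its indexes could be sketched in PostgreSQL DDL roughly as follows (column list follows the spec above; the split into one GIN and one partial b-tree index is an assumption about how `idx_miners_region_caps_gte_vram` would be realized):

```sql
CREATE SCHEMA IF NOT EXISTS poolhub;

CREATE TABLE poolhub.miners (
    miner_id      text PRIMARY KEY,
    api_key_hash  text NOT NULL,
    addr          text,
    proto         text,
    gpu_vram_gb   int,
    max_parallel  int,
    base_price    numeric,
    tags          jsonb,
    capabilities  jsonb,
    trust_score   float DEFAULT 0.5,
    region        text,
    created_at    timestamptz DEFAULT now(),
    last_seen_at  timestamptz
);

-- GIN index for capability containment queries (capabilities @> '["..."]')
CREATE INDEX idx_miners_capabilities ON poolhub.miners USING gin (capabilities);

-- Partial b-tree for the common region + minimum-VRAM filter
CREATE INDEX idx_miners_region_vram ON poolhub.miners (region, gpu_vram_gb)
    WHERE gpu_vram_gb >= 8;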

---

## 5) Matching & Scoring

**Filter**:
- Hard constraints: `min_vram_gb`, `min_ram_gb`, `accel`, `capabilities_any`, `region`, `max_price`, `queue_len < max_parallel`.

**Score formula (tunable)**:

```
score = w_cap*cap_fit
      + w_price*price_norm
      + w_latency*latency_norm
      + w_trust*trust_score
      + w_load*load_norm
```

- `cap_fit`: 1 if all required caps are present, else < 1, proportional to the overlap.
- `price_norm`: cheaper = closer to 1 (normalized against the request's price cap).
- `latency_norm`: 1 for the fastest observed in the region, decaying by percentile.
- `load_norm`: higher when `queue_len` is small and `max_parallel` is large.
- Default weights: `w_cap=0.40, w_price=0.20, w_latency=0.20, w_trust=0.15, w_load=0.05`.

Return the top K with an **explain string** for debugging.
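
The filter → score pipeline above can be sketched directly in Python. This is a minimal illustration, not the `scoring.py` implementation: the exact normalizations (`price_norm`, `latency_norm`) are assumptions consistent with the bullet descriptions, and `latency_norm` is taken as precomputed against the regional percentile:

```python
WEIGHTS = {"cap": 0.40, "price": 0.20, "latency": 0.20, "trust": 0.15, "load": 0.05}

def passes_filter(m, req):
    """Hard constraints from the Filter list; failing any excludes the miner."""
    return (m["gpu_vram_gb"] >= req.get("min_vram_gb", 0)
            and m["ram_gb"] >= req.get("min_ram_gb", 0)
            and m["price"] <= req["max_price"]
            and m["queue_len"] < m["max_parallel"])

def score(m, req, w=WEIGHTS):
    """Weighted sum from the formula above; every *_norm term lies in [0, 1]."""
    need = set(req.get("capabilities_any", []))
    cap_fit = len(need & set(m["capabilities"])) / len(need) if need else 1.0
    price_norm = max(0.0, 1.0 - m["price"] / req["max_price"])  # cheaper -> closer to 1
    load_norm = 1.0 - m["queue_len"] / m["max_parallel"]
    return (w["cap"] * cap_fit + w["price"] * price_norm
            + w["latency"] * m["latency_norm"]
            + w["trust"] * m["trust"] + w["load"] * load_norm)
```

`/v1/match` would then be `sorted(filter(passes_filter, miners), key=score, reverse=True)[:top_k]` plus the explain string.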

---

## 6) Security (MVP → later hardening)

- **Auth**:
  - Miners: static `api_key` → exchanged for a short-lived **session_token**.
  - Coordinator: **shared secret** header (MVP), rotated via env.
- **Network**:
  - Bind to localhost; expose via Nginx with an IP allowlist for the coordinator.
  - Rate limits on `/match` and `/register`.
- **Later**:
  - mTLS between pool-hub ↔ miners.
  - Signed capability manifests.
  - Attestation (NVIDIA MIG/hash) optional.

---

## 7) Configuration

- `.env` (loaded by FastAPI):
  - `POOLHUB_BIND=127.0.0.1`
  - `POOLHUB_PORT=8203`
  - `DB_DSN=postgresql://poolhub:*****@127.0.0.1:5432/aitbc`
  - `REDIS_URL=redis://127.0.0.1:6379/4` (optional)
  - `COORDINATOR_SHARED_SECRET=...`
  - `SESSION_TTL_SEC=60`
  - `HEARTBEAT_GRACE_SEC=120`
  - `DEFAULT_WEIGHTS=cap:0.4,price:0.2,latency:0.2,trust:0.15,load:0.05`
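
`DEFAULT_WEIGHTS` is a packed string, so settings loading needs a small parser. A sketch (the function name is illustrative; in practice this would live on the pydantic-settings model as a validator):

```python
def parse_weights(s: str) -> dict[str, float]:
    """Parse DEFAULT_WEIGHTS, e.g. 'cap:0.4,price:0.2,...' -> {'cap': 0.4, ...}."""
    out = {}
    for part in s.split(","):
        key, value = part.split(":")
        out[key.strip()] = float(value)
    return out
```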

---

## 8) Local Dev (Debian 12, no Docker)

```bash
# Python env
apt install -y python3-venv python3-pip
python3 -m venv .venv && source .venv/bin/activate
pip install fastapi 'uvicorn[standard]' pydantic-settings 'psycopg[binary]' orjson redis prometheus-client

# DB (psql one-liners, adjust to your policy)
psql -U postgres -c "CREATE USER poolhub WITH PASSWORD '***';"
psql -U postgres -c "CREATE DATABASE aitbc OWNER poolhub;"
psql -U postgres -d aitbc -c "CREATE SCHEMA poolhub AUTHORIZATION poolhub;"

# Run
uvicorn app.main:app --host 127.0.0.1 --port 8203 --workers 1 --proxy-headers
```

---

## 9) Systemd service (uvicorn)

`/etc/systemd/system/pool-hub.service`

```
[Unit]
Description=AITBC Pool Hub (FastAPI)
After=network-online.target postgresql.service
Wants=network-online.target

[Service]
User=games
Group=games
WorkingDirectory=/var/www/aitbc/pool-hub
Environment=PYTHONUNBUFFERED=1
EnvironmentFile=/var/www/aitbc/pool-hub/.env
ExecStart=/var/www/aitbc/pool-hub/.venv/bin/uvicorn app.main:app --host 127.0.0.1 --port 8203 --workers 1 --proxy-headers
Restart=always
RestartSec=2

[Install]
WantedBy=multi-user.target
```

---

## 10) Optional Nginx (host as reverse proxy)

```
server {
    listen 443 ssl http2;
    server_name pool-hub.example;
    # ssl_certificate ...; ssl_certificate_key ...;

    # Only allow the coordinator’s IPs
    allow 10.0.3.32;
    allow 127.0.0.1;
    deny all;

    location / {
        proxy_pass http://127.0.0.1:8203;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```

---

## 11) FastAPI App Skeleton (files to create)

```
pool-hub/
  app/
    __init__.py
    main.py          # FastAPI init, router includes, metrics
    deps.py          # auth, db sessions, rate limits
    models.py        # SQLAlchemy/SQLModel tables (see schema)
    schemas.py       # Pydantic I/O models
    registry.py      # in-memory index + Redis bridge
    scoring.py       # score strategies
    routers/
      miners.py      # register/update/ws/logout
      match.py       # /match and /feedback
      admin.py       # /miners snapshot (guarded)
      health.py      # /health, /metrics
  .env.example
  README.md
```

---

## 12) Testing & Validation Checklist

1. Register 3 miners (mixed VRAM, price, region).
2. Heartbeats arrive; stale miners drop after `HEARTBEAT_GRACE_SEC`.
3. `/match` with constraints returns a consistent top-K and explain string.
4. Rate limits kick in (flood `/match` at 50 rps).
5. Coordinator feedback adjusts the trust score; observe the score shifts.
6. Systemd restarts on crash; Nginx denies non-coordinator IPs.
7. Metrics expose `poolhub_miners_online`, `poolhub_match_latency_ms`.

---

## 13) Roadmap (post-MVP)

- Weighted **multi-dispatch** (split jobs across K miners, merge results).
- **Price discovery**: spot vs reserved capacity.
- **mTLS**, signed manifests, hardware attestation.
- **AIToken hooks**: settle/pay via `wallet-daemon` + `blockchain-node`.
- Global **region routing** + latency probes.

---

## 14) Dev Notes for Windsurf

- Start with **routers/miners.py** and **routers/match.py**.
- Implement `registry.py` with a simple in-process dict + RWLock; add Redis later.
- Keep scoring in **scoring.py** with clear weight constants and unit tests.
- Provide a tiny CLI seed tool to simulate miners for local testing.

---

### Open Questions

- Trust model inputs (what counts as a failure vs a transient error)?
- Minimal capability schema vs free-form tags?
- Region awareness from IP vs a miner-provided claim?

---

**Want me to generate the initial FastAPI file stubs for this layout now (yes/no)?**
@@ -1,335 +0,0 @@

# wallet-daemon.md

> **Role:** Local wallet service (keys, signing, RPC) for the 🤖 AITBC 🤑 stack
> **Audience:** Windsurf (programming assistant) + developers
> **Stage:** Bootable without blockchain; later pluggable into the chain node

---

## 1) What this daemon is (and isn’t)

**Goals**
1. Generate/import encrypted wallets (seed or raw keys).
2. Derive accounts/addresses (HD, multiple curves).
3. Hold keys **locally** and sign messages/transactions/receipts.
4. Expose a minimal **RPC** for client/coordinator/miner.
5. Provide a mock “ledger view” (balance cache + test transfers) until the chain is ready.

**Non-Goals (for now)**
- No P2P networking.
- No full node / block validation.
- No remote key export (keys never leave the box unencrypted).

---

## 2) Architecture (minimal, extensible)

- **Process:** Python FastAPI app (`uvicorn`)
- **RPC:** HTTP+JSON (REST) and JSON-RPC (both on the same server)
- **KeyStore:** on-disk, encrypted with **Argon2id + XChaCha20-Poly1305**
- **Curves:** `ed25519` (default), `secp256k1` (optional flag per account)
- **HD Derivation:** BIP-39 seed → SLIP-10 (ed25519), BIP-32 (secp256k1)
- **Coin type (provisional):** `AITBC = 12345` (placeholder; replace once registered)
- **Auth:** local-only by default (bind `127.0.0.1`), optional token header for remote use.
- **Events:** webhooks + a local FIFO/Unix-socket stream for “signed” notifications.
- **Mock Ledger (stage-1):** SQLite table for balances & transfers; sync adapter later.
**Directory layout**

```
wallet-daemon/
├─ app/                 # FastAPI service
│  ├─ main.py           # entrypoint
│  ├─ api_rest.py       # REST routes
│  ├─ api_jsonrpc.py    # JSON-RPC methods
│  ├─ crypto/           # keys, derivation, signing, addresses
│  ├─ keystore/         # encrypted store backend
│  ├─ models/           # pydantic schemas
│  ├─ ledger_mock/      # sqlite-backed balances
│  └─ settings.py       # config
├─ data/
│  ├─ keystore/         # *.kdb (per wallet)
│  └─ ledger.db
├─ tests/
└─ run.sh
```

---

## 3) Security model (Windsurf implementation notes)

- **Passwords:** Argon2id (tuned to the machine; env overrides)
  - `ARGON_TIME=4`, `ARGON_MEMORY_MB=256`, `ARGON_PARALLELISM=2` (defaults)
- **Encryption:** libsodium `crypto_aead_xchacha20poly1305_ietf`
- **At-rest format (per wallet file)**

  ```
  magic=v1-kdb
  salt=32B
  argon_params={t,m,p}
  nonce=24B
  ciphertext=…   # includes seed or master private key, plus metadata JSON
  ```

- **In-memory:** zeroize sensitive bytes after use; use `memoryview`/`ctypes` scrubbing where possible.
- **API hardening:**
  - Bind to `127.0.0.1` by default.
  - Optional `X-Auth-Token: <TOKEN>` for REST/JSON-RPC.
  - Rate limits on sign endpoints.
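
The at-rest layout above can be (de)serialized with the standard library alone; the crypto itself (Argon2id KDF, XChaCha20-Poly1305 AEAD) plugs in around it. This sketch covers only the byte layout — field order, sizes, and the length-prefixed params JSON are taken from the format block above; a real file would carry actual AEAD output as `ciphertext`:

```python
import json
import struct

MAGIC = b"v1-kdb"

def pack_wallet(salt: bytes, argon_params: dict, nonce: bytes, ciphertext: bytes) -> bytes:
    """Serialize: magic | salt(32) | len(params) | params JSON | nonce(24) | ciphertext."""
    assert len(salt) == 32 and len(nonce) == 24
    params = json.dumps(argon_params).encode()
    return MAGIC + salt + struct.pack(">I", len(params)) + params + nonce + ciphertext

def unpack_wallet(blob: bytes):
    assert blob[:6] == MAGIC, "not a v1-kdb file"
    salt = blob[6:38]
    (plen,) = struct.unpack(">I", blob[38:42])
    params = json.loads(blob[42:42 + plen])
    nonce = blob[42 + plen:66 + plen]
    ciphertext = blob[66 + plen:]
    return salt, params, nonce, ciphertext
```

Keeping the Argon2 parameters inside the file means old wallets remain decryptable after the tuned defaults change.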

---

## 4) Key & address strategy

- **Wallet types**
  1. **HD (preferred):** BIP-39 mnemonic (12/24 words) → master seed.
  2. **Single-key:** import a raw private key (ed25519/secp256k1).

- **Derivation paths**
  - **ed25519 (SLIP-10):** `m / 44' / 12345' / account' / change / index`
  - **secp256k1 (BIP-44):** `m / 44' / 12345' / account' / change / index`

- **Address format (temporary)**
  - **ed25519:** bech32 (HRP `ait`) of `blake2b-20(pubkey)`
  - **secp256k1:** same hash → bech32; flag the curve in metadata

> Replace the HRP and hash rules if the canonical AITBC chain spec differs.
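
The hash half of the address rule is a one-liner with the standard library; the bech32 layer (HRP `ait`, via e.g. the `bech32` package listed in the dependencies) is then applied on top of this 20-byte payload:

```python
import hashlib

def address_payload(pubkey: bytes) -> bytes:
    """20-byte blake2b digest of the raw public key; bech32-encode
    this payload with HRP 'ait' to get the final address string."""
    return hashlib.blake2b(pubkey, digest_size=20).digest()
```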

---

## 5) REST API (for Windsurf to scaffold)

**Headers**
- Optional: `X-Auth-Token: <token>`
- `Content-Type: application/json`

### 5.1 Wallet lifecycle
- `POST /v1/wallet/create`
  - body: `{ "name": "main", "password": "…", "mnemonic_len": 24 }`
  - returns: `{ "wallet_id": "…" }`
- `POST /v1/wallet/import-mnemonic`
  - `{ "name":"…","password":"…","mnemonic":"…","passphrase":"" }`
- `POST /v1/wallet/import-key`
  - `{ "name":"…","password":"…","curve":"ed25519|secp256k1","private_key_hex":"…" }`
- `POST /v1/wallet/unlock`
  - `{ "wallet_id":"…","password":"…" }` → unlocks into memory for N seconds (configurable TTL)
- `POST /v1/wallet/lock` → no body
- `GET /v1/wallets` → list minimal metadata (never secrets)
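
The unlock/lock endpoints imply an in-memory cache of decrypted wallet material that expires after `UNLOCK_TTL_SEC`. A minimal sketch (names are illustrative; the injectable `clock` keeps it testable, and a real implementation would also zeroize the secret bytes on lock):

```python
import time

class UnlockCache:
    """Holds decrypted wallet material in memory for at most ttl_sec seconds."""

    def __init__(self, ttl_sec: int = 120, clock=time.monotonic):
        self.ttl_sec, self.clock, self._open = ttl_sec, clock, {}

    def unlock(self, wallet_id: str, secret: bytes):
        self._open[wallet_id] = (secret, self.clock() + self.ttl_sec)

    def get(self, wallet_id: str):
        entry = self._open.get(wallet_id)
        if entry is None:
            return None
        secret, expires = entry
        if self.clock() >= expires:
            self.lock(wallet_id)  # TTL elapsed: drop the material
            return None
        return secret

    def lock(self, wallet_id: str):
        self._open.pop(wallet_id, None)
```

Sign endpoints call `get()`; a `None` result maps to the `WALLET_LOCKED` error below.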

### 5.2 Accounts & addresses
- `POST /v1/account/derive`
  - `{ "wallet_id":"…","curve":"ed25519","path":"m/44'/12345'/0'/0/0" }`
- `GET /v1/accounts?wallet_id=…`
- `GET /v1/address/{account_id}` → `{ "address":"ait1…", "curve":"ed25519" }`

### 5.3 Signing
- `POST /v1/sign/message`
  - `{ "account_id":"…","message_base64":"…" }` → `{ "signature_base64":"…" }`
- `POST /v1/sign/tx`
  - `{ "account_id":"…","tx_bytes_base64":"…","type":"aitbc_v0" }` → signature + (optionally) the signed blob
- `POST /v1/sign/receipt`
  - `{ "account_id":"…","payload":"…" }`
  - **Used by coordinator/miner** to sign job receipts & payouts.

### 5.4 Mock ledger (stage-1 only)
- `GET /v1/ledger/balance/{address}`
- `POST /v1/ledger/transfer`
  - `{ "from":"…","to":"…","amount":"123.456","memo":"test" }`
- `GET /v1/ledger/tx/{txid}`

### 5.5 Webhooks
- `POST /v1/webhooks`
  - `{ "url":"https://…/callback", "events":["signed","transfer"] }`
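
Webhook delivery (see the "webhook delivery (async task + retry)" task below) needs a retry policy. A simple capped exponential backoff sketch — the parameter values are assumptions, not part of the spec:

```python
def backoff_schedule(attempts: int = 5, base: float = 1.0, cap: float = 30.0):
    """Delays (seconds) between webhook delivery retries: 1, 2, 4, ... capped."""
    return [min(cap, base * 2 ** i) for i in range(attempts)]
```

The async delivery task sleeps for each delay in turn and gives up (or dead-letters the event) once the schedule is exhausted.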

**Error model**

```json
{ "error": { "code": "WALLET_LOCKED|NOT_FOUND|BAD_PASSWORD|RATE_LIMIT", "detail": "…" } }
```

---

## 6) JSON-RPC mirror (method → params)

- `wallet_create(name, password, mnemonic_len)`
- `wallet_import_mnemonic(name, password, mnemonic, passphrase)`
- `wallet_import_key(name, password, curve, private_key_hex)`
- `wallet_unlock(wallet_id, password)` / `wallet_lock()`
- `account_derive(wallet_id, curve, path)`
- `sign_message(account_id, message_base64)`
- `sign_tx(account_id, tx_bytes_base64, type)`
- `ledger_getBalance(address)` / `ledger_transfer(from, to, amount, memo)`

Same auth header; endpoint `/rpc`.
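
Because the JSON-RPC surface mirrors the REST handlers one-to-one, `api_jsonrpc.py` can be a thin dispatch table over the same service functions. A sketch of that shim (the `wallet_unlock` body is a placeholder, not the real handler):

```python
METHODS = {}

def rpc_method(fn):
    """Register a handler under its own name for JSON-RPC dispatch."""
    METHODS[fn.__name__] = fn
    return fn

@rpc_method
def wallet_unlock(wallet_id, password):
    # Placeholder: a real handler verifies the password and populates the unlock cache.
    return {"unlocked": wallet_id}

def dispatch(request: dict) -> dict:
    """Route one JSON-RPC 2.0 request to its registered handler."""
    try:
        result = METHODS[request["method"]](**request.get("params", {}))
        return {"jsonrpc": "2.0", "id": request.get("id"), "result": result}
    except KeyError:
        return {"jsonrpc": "2.0", "id": request.get("id"),
                "error": {"code": -32601, "message": "method not found"}}
```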

---

## 7) Data schemas (Pydantic hints)

```py
from datetime import datetime
from typing import Literal

from pydantic import BaseModel

class WalletMeta(BaseModel):
    wallet_id: str
    name: str
    curves: list[str]          # ["ed25519", "secp256k1"]
    created_at: datetime

class AccountMeta(BaseModel):
    account_id: str
    wallet_id: str
    curve: Literal["ed25519", "secp256k1"]
    path: str                  # HD path or "imported"
    address: str               # bech32 ait1...

class SignedResult(BaseModel):
    signature_base64: str
    public_key_hex: str
    algo: str                  # e.g., ed25519
```

---

## 8) Configuration (ENV)

```
WALLET_BIND=127.0.0.1
WALLET_PORT=8555
WALLET_TOKEN=                 # optional
KEYSTORE_DIR=./data/keystore
LEDGER_DB=./data/ledger.db
UNLOCK_TTL_SEC=120
ARGON_TIME=4
ARGON_MEMORY_MB=256
ARGON_PARALLELISM=2
```

---

## 9) Minimal boot script (dev)

```bash
# run.sh
export WALLET_BIND=127.0.0.1
export WALLET_PORT=8555
export KEYSTORE_DIR=./data/keystore
mkdir -p "$KEYSTORE_DIR" data
exec uvicorn app.main:app --host "${WALLET_BIND}" --port "${WALLET_PORT}"
```

---

## 10) Systemd unit (prod, template)

```
[Unit]
Description=AITBC Wallet Daemon
After=network.target

[Service]
User=root
WorkingDirectory=/opt/aitbc/wallet-daemon
Environment=WALLET_BIND=127.0.0.1
Environment=WALLET_PORT=8555
Environment=KEYSTORE_DIR=/opt/aitbc/wallet-daemon/data/keystore
ExecStart=/usr/bin/uvicorn app.main:app --host 127.0.0.1 --port 8555
Restart=always
NoNewPrivileges=true
PrivateTmp=true
ProtectSystem=full

[Install]
WantedBy=multi-user.target
```

---

## 11) Curl smoke tests

```bash
# 1) Create + unlock
curl -s localhost:8555/v1/wallet/create -X POST \
  -H 'Content-Type: application/json' \
  -d '{"name":"main","password":"pw","mnemonic_len":24}'

curl -s localhost:8555/v1/wallets

# 2) Derive first account (double-quoted body: the HD path itself contains apostrophes)
curl -s localhost:8555/v1/account/derive -X POST \
  -H 'Content-Type: application/json' \
  -d "{\"wallet_id\":\"W1\",\"curve\":\"ed25519\",\"path\":\"m/44'/12345'/0'/0/0\"}"

# 3) Sign message
curl -s localhost:8555/v1/sign/message -X POST \
  -H 'Content-Type: application/json' \
  -d '{"account_id":"A1","message_base64":"aGVsbG8="}'
```

---

## 12) Coordinator/miner integration (stage-1)

**Coordinator needs:**
- `GET /v1/address/{account_id}` for payout address lookup.
- `POST /v1/sign/receipt` to sign `{job_id, miner_id, result_hash, ts}`.
- Optional webhook on `signed` to chain/queue the payout request.

**Miner needs:**
- Local message signing for **proof-of-work-done** receipts.
- Optional “ephemeral sub-accounts” derived at `change=1` for per-job audit.

---

## 13) Migration path to real chain

1. Introduce a **ChainAdapter** interface:
   - `get_balance(address)`, `broadcast_tx(signed_tx)`, `fetch_utxos|nonce`, `estimate_fee`.
2. Implement `MockAdapter` (current), then `AitbcNodeAdapter` (RPC to a real node).
3. Swap via `WALLET_CHAIN=mock|aitbc`.

---

## 14) Windsurf tasks checklist

1. Create the repo structure (see **Directory layout**).
2. Add dependencies: `fastapi`, `uvicorn`, `pydantic`, `argon2-cffi`, `pynacl` (or `libsodium` bindings), `bech32`, `sqlalchemy` (for the mock ledger), `aiosqlite`.
3. Implement the **keystore** (encrypt/decrypt, file IO, metadata).
4. Implement **HD derivation** (SLIP-10 for ed25519, BIP-32 for secp256k1).
5. Implement the **address** helper (hash → bech32 with `ait`).
6. Implement **REST** routes, then the JSON-RPC shim.
7. Implement the **mock ledger** with SQLite + simple transfers.
8. Add **webhook** delivery (async task + retry).
9. Add **rate limits** on signing; add the unlock TTL.
10. Write unit tests for keystore, derivation, signing.
11. Provide `run.sh` + a systemd unit.
12. Add curl examples to the `README`.

---

## 15) Open questions (defaults proposed)

1. Keep both curves or start **ed25519-only**? → **Start ed25519-only**, gate secp256k1 behind a flag.
2. HRP `ait` and hash `blake2b-20` acceptable as interim? → **Yes (interim)**.
3. Store **per-account tags** (e.g., “payout”, “ops”)? → **Yes**, a simple string map.

---

## 16) Threat model notes (MVP)

- Local compromise = keys at risk ⇒ encourage a **passphrase on the mnemonic** + a **short unlock TTL**.
- Add an **IP allowlist** if binding to non-localhost.
- Consider a **YubiKey/PKCS#11** module later (signing via hardware).

---

## 17) Logging

- Default: INFO without sensitive fields.
- Redact: passwords, mnemonics, private keys, nonces.
- Correlate by `req_id` header (generated if missing).

---

## 18) License / Compliance

- Keep crypto libs permissive (MIT/ISC/BSD).
- Record algorithm choices & versions in an `/about` endpoint for audits.

---

**Proceed to generate the scaffold (FastAPI app + keystore + ed25519 HD + REST) now?** `y` / `n`
@@ -1,124 +0,0 @@

# Blockchain Node – Task Breakdown

## Status (2026-01-29)

- **Stage 1**: ✅ **DEPLOYED** - Blockchain Node successfully deployed on the host with the RPC API accessible
  - SQLModel-based blockchain with PoA consensus implemented
  - RPC API running on ports 8081/8082 (proxied via /rpc/ and /rpc2/)
  - Mock coordinator on port 8090 (proxied via /v1/)
  - Devnet scripts and observability hooks implemented
  - ✅ **NEW**: Transaction-dependent block creation implemented
  - ✅ **NEW**: Cross-site RPC synchronization implemented
  - Note: SQLModel/SQLAlchemy compatibility issues remain (low priority)

## Stage 1 (MVP) - COMPLETED

- **Project Scaffolding**
  - ✅ Create `apps/blockchain-node/src/` module layout (`types.py`, `state.py`, `blocks.py`, `mempool.py`, `consensus.py`, `rpc.py`, `p2p.py`, `receipts.py`, `settings.py`).
  - ✅ Add `requirements.txt` with FastAPI, SQLModel, websockets, orjson, python-dotenv.
  - ✅ Provide `.env.example` with `CHAIN_ID`, `DB_PATH`, bind addresses, proposer key.

- **State & Persistence**
  - ✅ Implement SQLModel tables for blocks, transactions, accounts, receipts, peers, params.
  - ✅ Set up database initialization and genesis loading.
  - ✅ Provide a migration or reset script under `scripts/`.

- **RPC Layer**
  - ✅ Build a FastAPI app exposing `/rpc/*` endpoints (sendTx, getTx, getBlock, getHead, getBalance, submitReceipt, metrics).
  - ✅ Implement admin endpoints for devnet (`mintFaucet`, `paramSet`, `peers/add`).

- **Consensus & Block Production**
  - ✅ Implement a PoA proposer loop producing blocks at a fixed interval.
  - ✅ Integrate mempool selection, receipt validation, and block broadcasting.
  - ✅ Add basic P2P gossip (WebSocket) for blocks/txs.
  - ✅ **NEW**: Transaction-dependent block creation - blocks are only created when the mempool has pending transactions
  - ✅ **NEW**: HTTP polling mechanism checks the RPC mempool size every 2 seconds
  - ✅ **NEW**: Eliminates empty blocks from the chain
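
The transaction-dependent proposer tick described above reduces to a simple guard. A sketch of one tick with the dependencies injected (names are illustrative, not the actual `consensus.py` code; the real loop would select transactions, sign the block, and broadcast it):

```python
def propose_if_pending(mempool_size, make_block):
    """One proposer tick: create a block only when the mempool reports
    pending transactions, so no empty blocks are ever appended."""
    n = mempool_size()  # in deployment: an HTTP poll of the RPC node every 2 s
    return make_block(n) if n > 0 else None
```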

- **Cross-Site Synchronization** [NEW]
  - ✅ Multi-site deployment with RPC-based sync
  - ✅ Transaction propagation between sites enabled
  - ✅ Block synchronization fully implemented (`/blocks/import` endpoint functional)
  - ✅ Remote endpoints configured for all nodes (localhost nodes sync with the remote)
  - ✅ Sync module integrated into the node lifecycle (start/stop)
  - Status: active on all 3 nodes with proper validation

- **Receipts & Minting**
  - ✅ Wire `receipts.py` to the coordinator attestation mock.
  - Mint tokens to miners based on compute_units with configurable ratios.

- **Devnet Tooling**
  - ✅ Provide `scripts/devnet_up.sh` launching the bootstrap node and mocks.
  - Document curl commands for faucet, transfer, receipt submission.
## Production Deployment Details

### Multi-Site Deployment
- **Site A (localhost)**: 2 nodes (ports 8081, 8082) - https://aitbc.bubuit.net/rpc/ and /rpc2/
- **Site B (remote host)**: ns3 server (95.216.198.140)
- **Site C (remote container)**: 1 node (port 8082) - http://aitbc.keisanki.net/rpc/
- **Service**: systemd services for blockchain-node, blockchain-node-2, blockchain-rpc
- **Proxy**: nginx routes /rpc/, /rpc2/, /v1/ to the appropriate services
- **Database**: SQLite with SQLModel ORM per node
- **Network**: cross-site RPC synchronization enabled

### Features
- Transaction-dependent block creation (prevents empty blocks)
- HTTP polling of the RPC mempool for transaction detection
- Cross-site transaction propagation via RPC polling
- Proper transaction storage in block data with tx_count
- Redis gossip backend for local transaction sharing

### Configuration
- **Chain ID**: "ait-devnet" (consistent across all sites)
- **Block Time**: 2 seconds
- **Cross-site sync**: enabled, 10-second poll interval
- **Remote endpoints**: configured per node for cross-site communication

### Issues
- SQLModel/SQLAlchemy compatibility (low priority)
- ✅ Block synchronization fully implemented via the /blocks/import endpoint
- Nodes maintain independent chains (by design with PoA)

## Stage 2+ - IN PROGRESS

- 🔄 Upgrade consensus to compute-backed proof (CBP) with work-score weighting.
- 🔄 Introduce staking/slashing, replace SQLite with PostgreSQL, add snapshots/fast sync.
- 🔄 Implement light-client support and a metrics dashboard.

## Recent Updates (2026-01-29)

### Cross-Site Synchronization Implementation
- **Module**: `/src/aitbc_chain/cross_site.py`
- **Purpose**: enable transaction and block propagation between sites via RPC
- **Features**:
  - Polls remote endpoints every 10 seconds
  - Detects height differences between nodes
  - Syncs mempool transactions across sites
  - ✅ Imports blocks between sites via the /blocks/import endpoint
  - Integrated into the node lifecycle (starts/stops with the node)
- **Status**: ✅ Fully deployed and functional on all 3 nodes
- **Endpoint**: /blocks/import POST with full transaction support
- **Nginx**: fixed routing to port 8081 for blockchain-rpc-2
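
The poll-and-import cycle above (detect a height difference, then pull the missing blocks through `/blocks/import`) can be sketched with the I/O injected. This is a simplified catch-up illustration, not the `cross_site.py` code — in the deployed design nodes keep independent chains, so real sync is mempool-first with block import as validation:

```python
def sync_step(local_height, peers, fetch_height, fetch_block, import_block):
    """One poll cycle: pull any blocks a remote peer has beyond our local height."""
    imported = 0
    for peer in peers:
        remote = fetch_height(peer)             # e.g. GET <peer>/getHead
        for h in range(local_height + 1, remote + 1):
            import_block(fetch_block(peer, h))  # e.g. POST /blocks/import
            imported += 1
        local_height = max(local_height, remote)
    return local_height, imported
```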

### Configuration Updates

```python
# Added to ChainSettings in config.py
cross_site_sync_enabled: bool = True
cross_site_remote_endpoints: list[str] = [
    "https://aitbc.bubuit.net/rpc2",  # Node 2
    "http://aitbc.keisanki.net/rpc",  # Node 3
]
cross_site_poll_interval: int = 10
```

### Current Node Heights
- Local nodes (1 & 2): 771153 (synchronized)
- Remote node (3): 40324 (independent chain)

### Technical Notes
- Each node maintains independent blockchain state
- Transactions can propagate between sites
- Block creation remains local to each node (PoA design)
- Network connectivity verified via the reverse proxy
@@ -1,84 +0,0 @@

# Coordinator API – Task Breakdown

## Status (2026-01-24)

- **Stage 1 delivery**: ✅ **DEPLOYED** - Coordinator API deployed in production behind https://aitbc.bubuit.net/api/
  - FastAPI service running in an Incus container on port 8000
  - Health endpoint operational: `/api/v1/health` returns `{"status":"ok","env":"dev"}`
  - nginx proxy configured at `/api/` (so `/api/v1/*` routes to the container service)
  - Explorer API available via nginx at `/api/explorer/*` (backend: `/v1/explorer/*`)
  - Users API available via `/api/v1/users/*` (compat: `/api/users/*` for Exchange)
- **Stage 2 delivery**: ✅ **DEPLOYED** - All import and syntax errors fixed (2025-12-28)
  - Fixed SQLModel import issues across the codebase
  - Resolved missing module dependencies
  - Database initialization working correctly with all tables created
- **Recent Bug Fixes (2026-01-24)**:
  - ✅ Fixed the missing `_coerce_float()` helper function in the receipt service that caused 500 errors
  - ✅ Receipt generation now works correctly for all job completions
  - ✅ Deployed the fix to the production Incus container via SSH
  - ✅ Result submission endpoint returns 200 OK with valid receipts
- **Testing & tooling**: Pytest suites cover job scheduling, miner flows, and receipt verification; the shared CI script `scripts/ci/run_python_tests.sh` executes these tests in GitHub Actions.
- **Documentation**: `docs/run.md` and `apps/coordinator-api/README.md` describe configuration for `RECEIPT_SIGNING_KEY_HEX` and `RECEIPT_ATTESTATION_KEY_HEX` plus the receipt history API.
- **Service APIs**: Implemented specific service endpoints for common GPU workloads (Whisper, Stable Diffusion, LLM inference, FFmpeg, Blender) with typed schemas and validation.
- **Service Registry**: Created a dynamic service-registry framework supporting 30+ GPU services across 6 categories (AI/ML, Media Processing, Scientific Computing, Data Analytics, Gaming, Development Tools).

## Stage 1 (MVP) - COMPLETED

- **Project Setup**
  - ✅ Initialize the FastAPI app under `apps/coordinator-api/src/app/` with `main.py`, `config.py`, `deps.py`.
  - ✅ Add `.env.example` covering host/port, database URL, API key lists, rate-limit configuration.
  - ✅ Create `pyproject.toml` listing FastAPI, uvicorn, pydantic, SQL driver, httpx, redis (optional).

- **Models & Persistence**
  - ✅ Design Pydantic schemas for jobs, miners, constraints, state transitions (`models.py`).
  - ✅ Implement the DB layer (`db.py`) using SQLite (or Postgres) with tables for jobs, miners, sessions, worker sessions.
  - ✅ Provide migrations or a schema-creation script.

- **Business Logic**
  - ✅ Implement `queue.py` and `matching.py` for job scheduling.
  - ✅ Create state-machine utilities (`states.py`) for job transitions.
  - ✅ Add settlement stubs in `settlement.py` for future token accounting.

- **Routers**
  - ✅ Build `/v1/jobs` endpoints (submit, get status, get result, cancel) with idempotency support.
  - ✅ Build `/v1/miners` endpoints (register, heartbeat, poll, result, fail, drain).
  - ✅ Build `/v1/admin` endpoints (stats, job listing, miner listing) with admin auth.
  - ✅ Build `/v1/services` endpoints for specific GPU workloads:
    - `/v1/services/whisper/transcribe` - Audio transcription
    - `/v1/services/stable-diffusion/generate` - Image generation
    - `/v1/services/llm/inference` - Text generation
    - `/v1/services/ffmpeg/transcode` - Video transcoding
    - `/v1/services/blender/render` - 3D rendering
  - ✅ Build `/v1/registry` endpoints for dynamic service management:
    - `/v1/registry/services` - List all available services
    - `/v1/registry/services/{id}` - Get a service definition
    - `/v1/registry/services/{id}/schema` - Get the JSON schema
    - `/v1/registry/services/{id}/requirements` - Get hardware requirements
  - Optionally add WebSocket endpoints under `ws/` for streaming updates.

- **Receipts & Attestations**
  - ✅ Persist signed receipts (latest + history), expose `/v1/jobs/{job_id}/receipt(s)` endpoints, and attach optional coordinator attestations when `RECEIPT_ATTESTATION_KEY_HEX` is configured.

- **Auth & Rate Limiting**
  - ✅ Implement dependencies in `deps.py` to validate API keys and optional HMAC signatures.
  - ✅ Add rate limiting (e.g., `slowapi`) per key.

- **Testing & Examples**
  - ✅ Create `.http` files or pytest suites for client/miner flows.
  - ✅ Document curl examples and quickstart instructions in `apps/coordinator-api/README.md`.

## Production Deployment Details

- **Container**: Incus container 'aitbc' at `/opt/coordinator-api/`
- **Service**: systemd service `coordinator-api.service` enabled and running
- **Port**: 8000 (internal), proxied via nginx at `/api/` (including `/api/v1/*`)
- **Dependencies**: virtual environment with FastAPI, uvicorn, pydantic installed
- **Access**: https://aitbc.bubuit.net/api/v1/health for the health check
- **Note**: Explorer + Users routes are enabled in production (see `/api/explorer/*` and `/api/v1/users/*`).

## Stage 2+ - IN PROGRESS

- 🔄 Integrate with blockchain receipts for settlement triggers.
- 🔄 Add Redis-backed queues for scalability.
- 🔄 Implement metrics and tracing (Prometheus/OpenTelemetry).
- 🔄 Support multi-region coordinators with pool-hub integration.
@@ -1,59 +0,0 @@
# Explorer Web – Task Breakdown

## Status (2025-12-30)

- **Stage 1**: ✅ **DEPLOYED** - Explorer Web successfully deployed in production at https://aitbc.bubuit.net/explorer/
  - All pages implemented with mock data integration, responsive design, and live data toggle
  - Genesis block (height 0) properly displayed
  - Mock/live data toggle functional
  - nginx proxy configured at `/explorer/` route
- **Stage 2**: ✅ Completed - Live mode validated against coordinator endpoints with Playwright e2e tests.
- **Stage 3**: ✅ Completed - JavaScript error fixes deployed (2025-12-30)
  - Fixed "can't access property 'length', t is undefined" error on page load
  - Updated fetchMock function to return correct data structure
  - Added defensive null checks across all page init functions

## Stage 1 (MVP) - COMPLETED

- **Structure & Assets**
  - ✅ Populate `apps/explorer-web/public/` with `index.html` and all page scaffolds.
  - ✅ Add base stylesheets (`public/css/base.css`, `public/css/layout.css`, `public/css/theme.css`).
  - ✅ Include logo and icon assets under `public/assets/`.

- **TypeScript Modules**
  - ✅ Provide configuration and data helpers (`src/config.ts`, `src/lib/mockData.ts`, `src/lib/models.ts`).
  - ✅ Add shared store/utilities module for cross-page state.
  - ✅ Implement core page controllers and components under `src/pages/` and `src/components/` (overview, blocks, transactions, addresses, receipts, header/footer, data mode toggle).

- **Mock Data**
  - ✅ Provide mock JSON fixtures under `public/mock/`.
  - ✅ Enable mock/live mode toggle via `getDataMode()` and `<data-mode-toggle>` components.

- **Interaction & UX**
  - ✅ Implement search box detection for block numbers, hashes, and addresses.
  - ✅ Add pagination or infinite scroll for block and transaction tables.
  - ✅ Expand responsive polish beyond overview cards (tablet/mobile grid, table hover states).
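The search-box detection above can be sketched as a small query classifier: all digits means a block height, 64 hex characters means a block/transaction hash, shorter hex means an address. The length thresholds and `0x`-prefix handling are assumptions, not the explorer's actual rules.

```python
import re

def classify_query(q: str) -> str:
    """Route a raw search string to the right explorer lookup (illustrative)."""
    q = q.strip()
    if re.fullmatch(r"\d+", q):
        return "block_height"
    if re.fullmatch(r"(0x)?[0-9a-fA-F]{64}", q):
        return "hash"
    if re.fullmatch(r"(0x)?[0-9a-fA-F]{20,63}", q):
        return "address"
    return "unknown"

assert classify_query("12345") == "block_height"
assert classify_query("0x" + "ab" * 32) == "hash"
assert classify_query("0x" + "ab" * 20) == "address"
```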

- **Live Mode Integration**
  - ✅ Hit live coordinator endpoints via nginx (`/api/explorer/blocks`, `/api/explorer/transactions`, `/api/explorer/addresses`, `/api/explorer/receipts`) when `getDataMode() === "live"`.
  - ✅ Add fallbacks + error surfacing for partial/failed live responses.
  - ✅ Implement Playwright e2e tests for live mode functionality.

- **Documentation**
  - ✅ Update `apps/explorer-web/README.md` with build/run instructions and API assumptions.
  - ✅ Capture coordinator API + CORS considerations in README deployment notes.

## Production Deployment Details

- **Container**: Incus container 'aitbc' at `/var/www/aitbc.bubuit.net/explorer/`
- **Build**: Vite + TypeScript build process
- **Port**: Static files served by nginx
- **Access**: https://aitbc.bubuit.net/explorer/
- **Features**: Genesis block display, mock/live toggle, responsive design
- **Mock Data**: Blocks.json with proper `{items: [...]}` structure

## Stage 2+ - IN PROGRESS

- 🔄 Integrate WebSocket streams for live head and mempool updates.
- 🔄 Add token balances and ABI decoding when supported by blockchain node.
- 🔄 Provide export-to-CSV functionality and light/dark theme toggle.
@@ -1,60 +0,0 @@
# Marketplace Web – Task Breakdown

## Status (2025-12-30)

- **Stage 1**: ✅ **DEPLOYED** - Marketplace Web successfully deployed in production at https://aitbc.bubuit.net/marketplace/
  - Vite + TypeScript project with API layer, auth scaffolding, and mock/live data toggle
  - Offer list, bid form, stats cards implemented
  - Mock data fixtures with API abstraction
  - nginx proxy configured at `/marketplace/` route
- **Stage 2**: ✅ Completed - Connected to coordinator endpoints with feature flags for live mode rollout.

## Stage 1 (MVP) - COMPLETED

- **Project Initialization**
  - ✅ Scaffold Vite + TypeScript project under `apps/marketplace-web/`.
  - ✅ Define `package.json`, `tsconfig.json`, `vite.config.ts`, and `.env.example` with `VITE_API_BASE`, `VITE_FEATURE_WALLET`.
  - ✅ Configure ESLint/Prettier presets.

- **API Layer**
  - ✅ Implement `src/api/http.ts` for base fetch wrapper with mock vs real toggle.
  - ✅ Create `src/api/marketplace.ts` with typed functions for offers, bids, stats, wallet.
  - ✅ Provide mock JSON files under `public/mock/` for development.

- **State Management**
  - ✅ Implement lightweight store in `src/lib/api.ts` with pub/sub and caching.
  - ✅ Define shared TypeScript interfaces in `src/lib/types.ts`.

- **Views & Components**
  - ✅ Build router in `src/main.ts` and bootstrap application.
  - ✅ Implement views: offer list, bid form, stats cards.
  - ✅ Create components with validation and responsive design.
  - ✅ Add filters (region, hardware, price, latency).

- **Styling & UX**
  - ✅ Create CSS system implementing design and responsive layout.
  - ✅ Ensure accessibility: semantic HTML, focus states, keyboard navigation.
  - ✅ Add toast notifications and form validation messaging.

- **Authentication**
  - ✅ Implement auth/session scaffolding in `src/lib/auth.ts`.
  - ✅ Add feature flags for marketplace actions.

- **Documentation**
  - ✅ Update `apps/marketplace-web/README.md` with instructions for dev/build, mock API usage, and configuration.

## Production Deployment Details

- **Container**: Incus container 'aitbc' at `/var/www/aitbc.bubuit.net/marketplace/`
- **Build**: Vite + TypeScript build process
- **Port**: Static files served by nginx
- **Access**: https://aitbc.bubuit.net/marketplace/
- **Features**: Offer list, bid form, stats cards, responsive design
- **Mock Data**: JSON fixtures in `public/mock/` directory

## Stage 2+ - IN PROGRESS

- 🔄 Integrate real coordinator/pool hub endpoints and authentication.
- 🔄 Add WebSocket updates for live offer/pricing changes.
- 🔄 Implement i18n support with dictionaries in `public/i18n/`.
- 🔄 Add Vitest test suite for utilities and API modules.
@@ -1,42 +0,0 @@
# Miner (Host Ops) – Task Breakdown

## Status (2025-12-22)

- **Stage 1**: ✅ **IMPLEMENTED** - Infrastructure scripts and runtime behavior validated through `apps/miner-node/` control loop; host installer/systemd automation implemented.

## Stage 1 (MVP) - COMPLETED

- **Installer & Scripts**
  - ✅ Finalize `/root/scripts/aitbc-miner/install_miner.sh` to install dependencies, create venv, deploy systemd unit.
  - ✅ Implement `/root/scripts/aitbc-miner/miner.sh` main loop (poll, run job, submit proof) as per bootstrap spec.
  - ✅ Ensure scripts detect GPU availability and switch between CUDA/CPU modes.

- **Configuration**
  - ✅ Define `/etc/aitbc/miner.conf` with environment-style keys (COORD_URL, WALLET_ADDR, API_KEY, MINER_ID, WORK_DIR, intervals).
  - ✅ Document configuration editing steps and permission requirements.

- **Systemd & Logging**
  - ✅ Install `aitbc-miner.service` unit with restart policy, log path, and hardening flags.
  - ✅ Provide optional logrotate config under `configs/systemd/` or `configs/security/`.

- **Mock Coordinator Integration**
  - ✅ Supply FastAPI mock coordinator (`mock_coordinator.py`) for local smoke testing.
  - ✅ Document curl or httpie commands to validate miner registration and proof submission.

- **Documentation**
  - ✅ Update `apps/miner-node/README.md` (ops section) and create runbooks under `docs/runbooks/` once available.
  - ✅ Add troubleshooting steps (GPU check, heartbeat failures, log locations).

## Implementation Status

- **Location**: `/root/scripts/aitbc-miner/` and `apps/miner-node/`
- **Features**: Installer scripts, systemd service, configuration management
- **Runtime**: Poll, execute jobs, submit proofs with GPU/CPU detection
- **Integration**: Mock coordinator for local testing
- **Deployment**: Ready for host deployment with systemd automation

## Stage 2+ - IN PROGRESS

- 🔄 Harden systemd service with `ProtectSystem`, `ProtectHome`, `NoNewPrivileges` and consider non-root user.
- 🔄 Add metrics integration (Prometheus exporters, GPU telemetry).
- 🔄 Automate zero-downtime updates with rolling restart instructions.
@@ -1,80 +0,0 @@
# Miner Node – Task Breakdown

## Status (2026-01-24)

- **Stage 1**: ✅ **IMPLEMENTED** - Core miner package (`apps/miner-node/src/aitbc_miner/`) provides registration, heartbeat, polling, and result submission flows with CLI/Python runners. Basic telemetry and tests exist.
- **Host GPU Miner**: ✅ **DEPLOYED** - Real GPU miner (`gpu_miner_host.py`) running on host with RTX 4060 Ti, Ollama integration, and systemd service. Successfully processes jobs and generates receipts with payment amounts.

## Recent Updates (2026-01-24)

### Host GPU Miner Deployment

- ✅ Deployed real GPU miner on host with NVIDIA RTX 4060 Ti (16GB)
- ✅ Integrated Ollama for LLM inference across 13+ models
- ✅ Configured systemd service (`aitbc-host-gpu-miner.service`)
- ✅ Fixed miner ID configuration (${MINER_API_KEY})
- ✅ Enhanced logging with flush handlers for systemd journal visibility
- ✅ Verified end-to-end workflow: job polling → Ollama inference → result submission → receipt generation

### Performance Metrics

- Processing time: ~11-25 seconds per inference job
- GPU utilization: 7-20% during processing
- Token processing: 200+ tokens per job
- Payment calculation: 11.846 gpu_seconds @ 0.02 AITBC = 0.23692 AITBC
- Receipt signature: Ed25519 cryptographic signing
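The payment figure above follows directly from billed GPU-seconds multiplied by the per-second rate. A minimal check, using `Decimal` to avoid float rounding in monetary arithmetic:

```python
from decimal import Decimal

def payment_amount(gpu_seconds: str, rate_per_second: str) -> Decimal:
    """AITBC owed for a job: billed GPU-seconds times the per-second rate."""
    return Decimal(gpu_seconds) * Decimal(rate_per_second)

# Matches the metric above: 11.846 gpu_seconds @ 0.02 AITBC.
amount = payment_amount("11.846", "0.02")
assert amount == Decimal("0.23692")
```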

### Integration Points

- Coordinator API: http://127.0.0.1:18000 (via Incus proxy)
- Miner ID: ${MINER_API_KEY}
- Heartbeat interval: 15 seconds
- Job polling: 3-second intervals
- Result submission: JSON with metrics and execution details

## Stage 1 (MVP) - COMPLETED

- **Package Skeleton**
  - ✅ Create Python package `aitbc_miner` with modules: `main.py`, `config.py`, `agent.py`, `probe.py`, `queue.py`, `runners/cli.py`, `runners/python.py`, `util/{fs.py, limits.py, log.py}`.
  - ✅ Add `pyproject.toml` or `requirements.txt` listing httpx, pydantic, pyyaml, psutil, uvloop (optional).

- **Configuration & Loading**
  - ✅ Implement YAML config parser supporting environment overrides (auth token, coordinator URL, heartbeat intervals, resource limits).
  - ✅ Provide `.env.example` or sample `config.yaml` in `apps/miner-node/`.

- **Capability Probe**
  - ✅ Collect CPU cores, memory, disk space, GPU info (nvidia-smi), runner availability.
  - ✅ Send capability payload to coordinator upon registration.

- **Agent Control Loop**
  - ✅ Implement async tasks for registration, heartbeat with backoff, job pulling/acking, job execution, result upload.
  - ✅ Manage workspace directories under `/var/lib/aitbc/miner/jobs/<job-id>/` with state persistence for crash recovery.

- **Runners**
  - ✅ CLI runner validating commands against allowlist definitions (`/etc/aitbc/miner/allowlist.d/`).
  - ✅ Python runner importing trusted modules from configured paths.
  - ✅ Enforce resource limits (nice, ionice, ulimit) and capture logs/metrics.

- **Result Handling**
  - ✅ Implement artifact upload via multipart requests and finalize job state with coordinator.
  - ✅ Support failure reporting with detailed error codes (E_DENY, E_OOM, E_TIMEOUT, etc.).

- **Telemetry & Health**
  - ✅ Emit structured JSON logs; optionally expose `/healthz` endpoint.
  - ✅ Track metrics: running jobs, queue length, VRAM free, CPU load.

- **Testing**
  - ✅ Provide unit tests for config loader, allowlist validator, capability probe.
  - ✅ Add integration test hitting `mock_coordinator.py` from bootstrap docs.

## Implementation Status

- **Location**: `apps/miner-node/src/aitbc_miner/`
- **Features**: Registration, heartbeat, job polling, result submission
- **Runners**: CLI and Python runners with allowlist validation
- **Resource Management**: CPU, memory, disk, GPU monitoring
- **Deployment**: Ready for deployment with coordinator integration

## Stage 2+ - IN PROGRESS

- 🔄 Implement multi-slot scheduling (GPU vs CPU) with cgroup integration.
- 🔄 Add Redis-backed queue for job retries and persistent metrics export.
- 🔄 Support secure secret handling (tmpfs, hardware tokens) and network egress policies.
@@ -1,64 +0,0 @@
# Pool Hub – Task Breakdown

## Status (2025-12-22)

- **Stage 1**: ✅ **IMPLEMENTED** - FastAPI service implemented with miner registry, scoring engine, and Redis/PostgreSQL backing stores. Service configuration API and UI added for GPU providers to select which services to offer.
- **Service Configuration**: ✅ Implemented dynamic service configuration allowing miners to enable/disable specific GPU services, set pricing, and define capabilities.

## Stage 1 (MVP) - COMPLETED

- **Project Setup**
  - ✅ Initialize FastAPI project under `apps/pool-hub/src/app/` with `main.py`, `deps.py`, `registry.py`, `scoring.py`, and router modules (`miners.py`, `match.py`, `admin.py`, `health.py`).
  - ✅ Add `.env.example` defining bind host/port, DB DSN, Redis URL, coordinator shared secret, session TTLs.
  - ✅ Configure dependencies: FastAPI, uvicorn, pydantic-settings, SQLAlchemy/SQLModel, psycopg (or sqlite), redis, prometheus-client.

- **Data Layer**
  - ✅ Implement PostgreSQL schema for miners, miner status, feedback, price overrides as outlined in bootstrap doc.
  - ✅ Provide migrations or DDL scripts under `apps/pool-hub/migrations/`.

- **Registry & Scoring**
  - ✅ Build in-memory registry (with optional Redis backing) storing miner capabilities, health, and pricing.
  - ✅ Implement scoring function weighing capability fit, price, latency, trust, and load.

- **API Endpoints**
  - ✅ `POST /v1/miners/register` exchanging API key for session token, storing capability profile.
  - ✅ `POST /v1/miners/update` and `WS /v1/miners/heartbeat` for status updates.
  - ✅ `POST /v1/match` returning top K candidates for coordinator requests with explain string.
  - ✅ `POST /v1/feedback` to adjust trust and metrics.
  - ✅ `GET /v1/health` and `GET /v1/metrics` for observability.
  - ✅ Service Configuration endpoints:
    - `GET /v1/services/` - List all service configurations for miner
    - `GET /v1/services/{type}` - Get specific service configuration
    - `POST /v1/services/{type}` - Create/update service configuration
    - `PATCH /v1/services/{type}` - Partial update
    - `DELETE /v1/services/{type}` - Delete configuration
    - `GET /v1/services/templates/{type}` - Get default templates
    - `POST /v1/services/validate/{type}` - Validate against hardware
  - ✅ UI endpoint:
    - `GET /services` - Service configuration web interface
  - ✅ Optional admin listing endpoint guarded by shared secret.

- **Rate Limiting & Security**
  - ✅ Enforce coordinator shared secret on `/v1/match`.
  - ✅ Add rate limits to registration and match endpoints.
  - ✅ Consider IP allowlist and TLS termination guidance.

- **Testing & Tooling**
  - ✅ Unit tests for scoring module, registry updates, and feedback adjustments.
  - ✅ Integration test simulating miners registering, updating, and matching.
  - ✅ Provide CLI scripts to seed mock miners for development.

## Implementation Status

- **Location**: `apps/pool-hub/src/app/`
- **Features**: Miner registry, scoring engine, service configuration, UI
- **Database**: PostgreSQL with Redis backing
- **API**: REST endpoints with WebSocket heartbeat support
- **Security**: Coordinator shared secret, rate limiting
- **Deployment**: Ready for deployment with systemd service

## Stage 2+ - IN PROGRESS

- 🔄 Introduce WebSocket streaming of match suggestions and commands.
- 🔄 Add Redis-based lease management, multi-region routing, and attested capability manifests.
- 🔄 Integrate marketplace pricing data and blockchain settlement hooks.
@@ -1,258 +0,0 @@
# Trade Exchange Documentation

## Overview

The AITBC Trade Exchange is a web platform that allows users to buy AITBC tokens using Bitcoin. It features a modern, responsive interface with user authentication, wallet management, and real-time trading capabilities.

## Features

### Bitcoin Wallet Integration

- **Payment Gateway**: Buy AITBC tokens with Bitcoin
- **QR Code Support**: Mobile-friendly payment QR codes
- **Real-time Monitoring**: Automatic payment confirmation tracking
- **Exchange Rate**: 1 BTC = 100,000 AITBC (configurable)

### User Management

- **Wallet-based Authentication**: No passwords required
- **Individual Accounts**: Each user has a unique wallet and balance
- **Session Security**: 24-hour token-based sessions
- **Profile Management**: View transaction history and account details

### Trading Interface

- **Live Prices**: Real-time exchange rate updates
- **Payment Tracking**: Monitor Bitcoin payments and AITBC credits
- **Transaction History**: Complete record of all trades
- **Mobile Responsive**: Works on all devices

## Getting Started

### 1. Access the Exchange

Visit: https://aitbc.bubuit.net/Exchange/

### 2. Connect Your Wallet

1. Click "Connect Wallet" in the navigation
2. A unique wallet address is generated
3. Your user account is created automatically

### 3. Buy AITBC Tokens

1. Navigate to the Trade section
2. Enter the amount of AITBC you want to buy
3. The Bitcoin equivalent is calculated
4. Click "Create Payment Request"
5. Send Bitcoin to the provided address
6. Wait for confirmation (1 confirmation needed)
7. AITBC tokens are credited to your wallet

## API Reference

### User Management

#### Login/Register

```http
POST /api/users/login
{
  "wallet_address": "aitbc1abc123..."
}
```

Canonical route (same backend, without compatibility proxy):

```http
POST /api/v1/users/login
{
  "wallet_address": "aitbc1abc123..."
}
```

#### Get User Profile

```http
GET /api/users/me
Headers: X-Session-Token: <token>
```

Canonical route:

```http
GET /api/v1/users/users/me
Headers: X-Session-Token: <token>
```

#### Get User Balance

```http
GET /api/users/{user_id}/balance
Headers: X-Session-Token: <token>
```

Canonical route:

```http
GET /api/v1/users/users/{user_id}/balance
Headers: X-Session-Token: <token>
```

#### Logout

```http
POST /api/users/logout
Headers: X-Session-Token: <token>
```

Canonical route:

```http
POST /api/v1/users/logout
Headers: X-Session-Token: <token>
```

### Exchange Operations

#### Create Payment Request

```http
POST /api/exchange/create-payment
{
  "user_id": "uuid",
  "aitbc_amount": 1000,
  "btc_amount": 0.01
}
Headers: X-Session-Token: <token>
```

#### Check Payment Status

```http
GET /api/exchange/payment-status/{payment_id}
```

#### Get Exchange Rates

```http
GET /api/exchange/rates
```

## Configuration

### Bitcoin Settings

- **Network**: Bitcoin Testnet (for demo)
- **Confirmations Required**: 1
- **Payment Timeout**: 1 hour
- **Main Address**: tb1qxy2kgdygjrsqtzq2n0yrf2493p83kkfjhx0wlh

### Exchange Settings

- **Rate**: 1 BTC = 100,000 AITBC
- **Fee**: 0.5% transaction fee
- **Min Payment**: 0.0001 BTC
- **Max Payment**: 10 BTC
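Putting the settings above together, a hedged sketch of the quote arithmetic: convert AITBC to BTC at the configured rate, apply the 0.5% fee, and enforce the payment bounds. Whether the fee is added on top of the payment or deducted from the credited amount is an assumption here.

```python
from decimal import Decimal

RATE = Decimal("100000")   # 1 BTC = 100,000 AITBC
FEE = Decimal("0.005")     # 0.5% transaction fee
MIN_BTC, MAX_BTC = Decimal("0.0001"), Decimal("10")

def quote_btc_for_aitbc(aitbc_amount: Decimal) -> Decimal:
    """BTC the buyer must send for the requested AITBC (fee assumed added on top)."""
    total = (aitbc_amount / RATE) * (1 + FEE)
    if not (MIN_BTC <= total <= MAX_BTC):
        raise ValueError(f"payment {total} BTC outside [{MIN_BTC}, {MAX_BTC}]")
    return total

# 1000 AITBC → 0.01 BTC base + 0.5% fee.
assert quote_btc_for_aitbc(Decimal("1000")) == Decimal("0.01005")
```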

## Security

### Authentication

- **Session Tokens**: SHA-256 hashed tokens
- **Expiry**: 24 hours automatic timeout
- **Storage**: Server-side session management
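A minimal sketch of the token scheme described above, assuming the server stores only the SHA-256 digest of each token plus an expiry timestamp (the exact storage layout is an assumption):

```python
import hashlib
import secrets
import time

SESSION_TTL = 24 * 3600  # 24-hour expiry, per the settings above

def issue_session() -> tuple:
    """Return (token for the client, SHA-256 digest for server storage, expiry)."""
    token = secrets.token_hex(32)
    digest = hashlib.sha256(token.encode()).hexdigest()
    return token, digest, time.time() + SESSION_TTL

def validate(token: str, stored_digest: str, expires_at: float) -> bool:
    if time.time() > expires_at:
        return False  # automatic timeout
    return hashlib.sha256(token.encode()).hexdigest() == stored_digest

token, digest, expires_at = issue_session()
assert validate(token, digest, expires_at)
assert not validate("wrong-token", digest, expires_at)
```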

### Privacy

- **User Isolation**: Each user has private data
- **No Tracking**: No personal data collected
- **GDPR Compliant**: Minimal data retention

## Development

### Frontend Stack

- **HTML5**: Semantic markup
- **CSS3**: TailwindCSS for styling
- **JavaScript**: Vanilla JS with Axios
- **Lucide Icons**: Modern icon library

### Backend Stack

- **FastAPI**: Python web framework
- **SQLModel**: Database ORM
- **SQLite**: Development database
- **Pydantic**: Data validation

### File Structure

```
apps/trade-exchange/
├── index.html          # Main application
├── bitcoin-wallet.py   # Bitcoin integration
└── README.md           # Setup instructions

apps/coordinator-api/src/app/
├── routers/
│   ├── users.py        # User management
│   └── exchange.py     # Exchange operations
├── domain/
│   └── user.py         # User models
└── schemas.py          # API schemas
```

## Deployment

### Production

- **URL**: https://aitbc.bubuit.net/Exchange/
- **SSL**: Fully configured
- **CDN**: Nginx static serving
- **API**: /api/v1/* endpoints

### Environment Variables

```bash
BITCOIN_TESTNET=true
BITCOIN_ADDRESS=tb1q...
BTC_TO_AITBC_RATE=100000
MIN_CONFIRMATIONS=1
```

## Testing

### Testnet Bitcoin

Get free testnet Bitcoin from:

- https://testnet-faucet.mempool.co/
- https://coinfaucet.eu/en/btc-testnet/

### Demo Mode

- No real Bitcoin required
- Simulated payments for testing
- Auto-generated wallet addresses

## Troubleshooting

### Common Issues

**Payment Not Showing**

- Check transaction has 1 confirmation
- Verify correct amount sent
- Refresh the page

**Can't Connect Wallet**

- Check JavaScript is enabled
- Clear browser cache
- Try a different browser

**Balance Incorrect**

- Wait for blockchain sync
- Check transaction history
- Contact support

### Logs

Check application logs:

```bash
journalctl -u aitbc-coordinator -f
```

## Future Enhancements

### Planned Features

- [ ] MetaMask wallet support
- [ ] Advanced trading charts
- [ ] Limit orders
- [ ] Mobile app
- [ ] Multi-currency support

### Technical Improvements

- [ ] Redis session storage
- [ ] PostgreSQL database
- [ ] Microservices architecture
- [ ] WebSocket real-time updates

## Support

For help or questions:

- **Documentation**: https://aitbc.bubuit.net/docs/
- **API Docs**: https://aitbc.bubuit.net/api/docs
- **Admin Panel**: https://aitbc.bubuit.net/admin/stats

## License

This project is part of the AITBC ecosystem. See the main repository for license information.
@@ -1,53 +0,0 @@
# Wallet Daemon – Task Breakdown

## Status (2025-12-22)

- **Stage 1**: ✅ **DEPLOYED** - Wallet Daemon successfully deployed in production at https://aitbc.bubuit.net/wallet/
  - FastAPI application running in Incus container on port 8002
  - Encrypted keystore with Argon2id + XChaCha20-Poly1305 implemented
  - REST and JSON-RPC APIs operational
  - Mock ledger with SQLite backend functional
  - Receipt verification using aitbc_sdk integrated
  - nginx proxy configured at /wallet/ route

## Stage 1 (MVP) - COMPLETED

- **Project Setup**
  - ✅ Initialize FastAPI application under `apps/wallet-daemon/src/app/` with `main.py`, `settings.py`, `api_rest.py`, `api_jsonrpc.py`.
  - ✅ Create crypto and keystore modules implementing Argon2id key derivation and XChaCha20-Poly1305 encryption.
  - ✅ Add dependencies: FastAPI, uvicorn, argon2-cffi, pynacl, aitbc-sdk, aitbc-crypto, pydantic-settings.

- **Keystore & Security**
  - ✅ Implement encrypted wallet file format storing metadata, salt, nonce, ciphertext.
  - ✅ Provide REST endpoints to create/import wallets, unlock/lock, derive accounts.
  - ✅ Enforce unlock TTL and in-memory zeroization of sensitive data.

- **REST & JSON-RPC APIs**
  - ✅ Implement REST routes: wallet lifecycle, account derivation, signing (message/tx/receipt), mock ledger endpoints.
  - ✅ Mirror functionality via JSON-RPC under `/rpc`.
  - ✅ Enforce authentication token headers and rate limits on signing operations.

- **Mock Ledger**
  - ✅ Implement SQLite-backed ledger with balances and transfers for local testing.
  - ✅ Provide REST endpoints to query balances and submit transfers.

- **Documentation & Examples**
  - ✅ Update deployment documentation with systemd service and nginx proxy configuration.
  - ✅ Document production endpoints and API access via https://aitbc.bubuit.net/wallet/

- **Receipts**
  - ✅ Integrate `ReceiptVerifierService` consuming `CoordinatorReceiptClient` to fetch and validate receipts (miner + coordinator signatures).

## Production Deployment Details

- **Container**: Incus container 'aitbc' at `/opt/wallet-daemon/`
- **Service**: systemd service `wallet-daemon.service` enabled and running
- **Port**: 8002 (internal), proxied via nginx at `/wallet/`
- **Dependencies**: Virtual environment with all required packages installed
- **Access**: https://aitbc.bubuit.net/wallet/docs for API documentation

## Stage 2+ - IN PROGRESS

- 🔄 Add ChainAdapter interface targeting real blockchain node RPC.
- 🔄 Implement mock adapter first, followed by AITBC node adapter.
- 🔄 Support hardware-backed signing (YubiKey/PKCS#11) and multi-curve support gating.
- 🔄 Introduce webhook retry/backoff logic and structured logging with request IDs.
@@ -1,270 +0,0 @@
# Zero-Knowledge Applications in AITBC

This document describes the Zero-Knowledge (ZK) proof capabilities implemented in the AITBC platform.

## Overview

AITBC now supports privacy-preserving operations through ZK-SNARKs, allowing users to prove computations, membership, and other properties without revealing sensitive information.

## Available ZK Features

### 1. Identity Commitments

Create privacy-preserving identity commitments that allow you to prove you're a valid user without revealing your identity.

**Endpoint**: `POST /api/zk/identity/commit`

**Request**:

```json
{
  "salt": "optional_random_string"
}
```

**Response**:

```json
{
  "commitment": "hash_of_identity_and_salt",
  "salt": "used_salt",
  "user_id": "user_identifier",
  "created_at": "2025-12-28T17:50:00Z"
}
```
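At the basic privacy level, such a commitment can be sketched as a salted hash: the server can later verify a revealed `(user_id, salt)` pair against the stored commitment without ever storing the identity in the clear. The exact preimage encoding used by the API is an assumption.

```python
import hashlib
import secrets
from typing import Optional

def identity_commitment(user_id: str, salt: Optional[str] = None) -> dict:
    """Commit to an identity: SHA-256 over the identity and a random salt."""
    salt = salt or secrets.token_hex(16)  # fresh salt if none supplied
    digest = hashlib.sha256(f"{user_id}:{salt}".encode()).hexdigest()
    return {"commitment": digest, "salt": salt, "user_id": user_id}

# Binding: the same (identity, salt) pair always reopens the same commitment.
a = identity_commitment("user_1", salt="fixed_salt")
b = identity_commitment("user_1", salt="fixed_salt")
assert a["commitment"] == b["commitment"]
# Hiding: a fresh salt yields an unlinkable commitment for the same identity.
assert identity_commitment("user_1")["commitment"] != a["commitment"]
```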

### 2. Stealth Addresses

Generate one-time payment addresses for enhanced privacy in transactions.

**Endpoint**: `POST /api/zk/stealth/address`

**Parameters**:

- `recipient_public_key` (query): The recipient's public key

**Response**:

```json
{
  "stealth_address": "0x27b224d39bb988620a1447eb4bce6fc629e15331",
  "shared_secret_hash": "b9919ff990cd8793aa587cf5fd800efb997b6dcd...",
  "ephemeral_key": "ca8acd0ae4a9372cdaeef7eb3ac7eb10",
  "view_key": "0x5f7de2cc364f7c8d64ce1051c97a1ba6028f83d9"
}
```
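A hash-only sketch of the one-time derivation, showing why each payment gets a different address: a fresh ephemeral key is mixed with the recipient's public key into a shared secret, which is hashed down to an address. This is illustrative only; a production stealth-address scheme derives the shared secret via Diffie–Hellman, not by hashing the public key with a random nonce.

```python
import hashlib
import secrets

def derive_stealth_address(recipient_public_key: str) -> dict:
    """One-time address per call (hash-based sketch, not a real DH scheme)."""
    ephemeral_key = secrets.token_hex(16)
    shared = hashlib.sha256(
        f"{ephemeral_key}:{recipient_public_key}".encode()
    ).hexdigest()
    stealth_address = "0x" + hashlib.sha256(shared.encode()).hexdigest()[:40]
    return {
        "stealth_address": stealth_address,
        "shared_secret_hash": shared,
        "ephemeral_key": ephemeral_key,
    }

# Two payments to the same recipient land at unlinkable addresses.
one = derive_stealth_address("pubkey_abc")
two = derive_stealth_address("pubkey_abc")
assert one["stealth_address"] != two["stealth_address"]
```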
|
||||
|
||||
### 3. Private Receipt Attestation
|
||||
|
||||
Create receipts that prove computation occurred without revealing the actual computation details.
|
||||
|
||||
**Endpoint**: `POST /api/zk/receipt/attest`
|
||||
|
||||
**Parameters**:
|
||||
- `job_id` (query): Identifier of the computation job
|
||||
- `user_address` (query): Address of the user requesting computation
|
||||
- `computation_result` (query): Hash of the computation result
|
||||
- `privacy_level` (query): "basic", "medium", or "maximum"
|
||||
|
||||
**Response**:
|
||||
```json
|
||||
{
|
||||
"job_id": "job_123",
|
||||
"user_address": "0xabcdef",
|
||||
"commitment": "a6a8598788c066115dcc8ca35032dc60b89f2e138...",
|
||||
"privacy_level": "basic",
|
||||
"timestamp": "2025-12-28T17:51:26.758953",
|
||||
"verified": true
|
||||
}
|
||||
```
|
||||
|
||||
### 4. Group Membership Proofs
|
||||
|
||||
Prove membership in a group (miners, clients, developers) without revealing your identity.
|
||||
|
||||
**Endpoint**: `POST /api/zk/membership/verify`
|
||||
|
||||
**Request**:
|
||||
```json
|
||||
{
|
||||
"group_id": "miners",
|
||||
"nullifier": "unique_64_char_string",
|
||||
"proof": "zk_snark_proof_string"
|
||||
}
|
||||
```
|
||||
|
||||
### 5. Private Bidding
|
||||
|
||||
Submit bids to marketplace auctions without revealing the bid amount.
|
||||
|
||||
**Endpoint**: `POST /api/zk/marketplace/private-bid`
|
||||
|
||||
**Request**:
|
||||
```json
|
||||
{
|
||||
"auction_id": "auction_123",
|
||||
"bid_commitment": "hash_of_bid_and_salt",
|
||||
"proof": "proof_that_bid_is_in_valid_range"
|
||||
}
|
||||
```
|
||||
|
||||
### 6. Computation Proofs

Verify that AI computations were performed correctly without revealing the inputs.

**Endpoint**: `POST /api/zk/computation/verify`

**Request**:

```json
{
  "job_id": "job_456",
  "result_hash": "hash_of_computation_result",
  "proof_of_execution": "zk_snark_proof",
  "public_inputs": {}
}
```

## Anonymity Sets

View available anonymity sets for privacy operations:

**Endpoint**: `GET /api/zk/anonymity/sets`

**Response**:

```json
{
  "sets": {
    "miners": {
      "size": 100,
      "description": "Registered GPU miners",
      "type": "merkle_tree"
    },
    "clients": {
      "size": 500,
      "description": "Active clients",
      "type": "merkle_tree"
    },
    "transactions": {
      "size": 1000,
      "description": "Recent transactions",
      "type": "ring_signature"
    }
  },
  "min_anonymity": 3,
  "recommended_sets": ["miners", "clients"]
}
```

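For the `merkle_tree` set type, membership is proven with a Merkle inclusion proof against the set's root. A minimal sketch using SHA-256 with sorted-pair hashing (the production circuits likely use a circuit-friendly hash such as Poseidon, so this is illustrative only):

```python
import hashlib

def _h(a: bytes, b: bytes) -> bytes:
    # Sort the pair so the verifier needs no left/right direction flags.
    return hashlib.sha256(min(a, b) + max(a, b)).digest()

def _hash_leaves(leaves):
    return [hashlib.sha256(x).digest() for x in leaves]

def merkle_root(leaves) -> bytes:
    level = _hash_leaves(leaves)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate the last node on odd levels
        level = [_h(level[i], level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index: int):
    """Sibling hashes from the leaf up to (but excluding) the root."""
    level = _hash_leaves(leaves)
    path = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        path.append(level[index ^ 1])  # sibling of the current node
        level = [_h(level[i], level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return path

def verify_membership(leaf: bytes, path, root: bytes) -> bool:
    node = hashlib.sha256(leaf).digest()
    for sibling in path:
        node = _h(node, sibling)
    return node == root
```

In the ZK setting the same recomputation happens inside the circuit, so the verifier learns that *some* leaf is in the set without learning which one.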
## Technical Implementation

### Circuit Compilation

The ZK circuits are compiled using:

- **Circom**: v2.2.3
- **Circomlib**: For standard circuit components
- **SnarkJS**: For trusted setup and proof generation

### Trusted Setup

A complete trusted setup ceremony has been performed:

1. Powers of Tau ceremony with 2^12 powers
2. Phase 2 preparation for specific circuits
3. Groth16 proving keys generated
4. Verification keys exported

### Circuit Files

The following circuit files are deployed:

- `receipt_simple_0001.zkey`: Proving key for receipt circuit
- `receipt_simple.wasm`: WASM witness generator
- `verification_key.json`: Verification key for on-chain verification

### Privacy Levels

1. **Basic**: Hash-based commitments (no ZK-SNARKs)
2. **Medium**: Simple ZK proofs with limited constraints
3. **Maximum**: Full ZK-SNARKs with complete privacy

## Security Considerations

1. **Trusted Setup**: The ceremony was performed with proper entropy and multiple independent contributions
2. **Randomness**: All operations use cryptographically secure random number generation
3. **Nullifiers**: Prevent double-spending and replay attacks
4. **Verification**: All proofs can be verified on-chain or off-chain

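The nullifier mechanism can be sketched as a deterministic hash of the member's secret and a context string: the same action in the same context always yields the same nullifier, so the verifier can reject replays without learning who acted. The derivation below is illustrative; the actual derivation used by the circuits is not specified here:

```python
import hashlib

def derive_nullifier(member_secret: bytes, context: str) -> str:
    """One deterministic nullifier per (secret, context) pair."""
    return hashlib.sha256(member_secret + b"|" + context.encode()).hexdigest()

class NullifierRegistry:
    """Verifier-side set of already-used nullifiers."""

    def __init__(self) -> None:
        self._seen = set()

    def accept(self, nullifier: str) -> bool:
        if nullifier in self._seen:
            return False  # replay attempt: same secret, same context
        self._seen.add(nullifier)
        return True
```

A fresh context (e.g. a new auction or voting round) produces a fresh nullifier, so legitimate repeated participation across contexts is unaffected.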
## Future Enhancements

1. **Additional Circuits**: Membership and bid-range circuits to be compiled
2. **Recursive Proofs**: Enable proof composition for complex operations
3. **On-Chain Verification**: Deploy verification contracts to the blockchain
4. **Hardware Acceleration**: GPU acceleration for proof generation

## API Status

Check the current status of ZK features:

**Endpoint**: `GET /api/zk/status`

This endpoint returns detailed information about:

- Which ZK features are active
- Circuit compilation status
- Available proof types
- Next steps for implementation

## Integration Guide

To integrate ZK proofs in your application:

1. **Generate Proof**: Use the appropriate endpoint to generate a proof
2. **Submit Proof**: Include the proof in your transaction or API call
3. **Verify Proof**: The system will automatically verify the proof
4. **Privacy**: Your sensitive data remains private throughout the process

## Examples

### Private Marketplace Bid

```javascript
// 1. Create bid commitment
const bidAmount = 100;
const salt = generateRandomSalt();
// Hash a canonical encoding so the amount and salt cannot run together
const commitment = hash(`${bidAmount}:${salt}`);

// 2. Generate ZK proof that the bid is within range
const proof = await generateBidRangeProof(bidAmount, salt);

// 3. Submit the private bid
const response = await fetch('/api/zk/marketplace/private-bid', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    auction_id: 'auction_123',
    bid_commitment: commitment,
    proof: proof
  })
});
```

### Stealth Address Payment

```javascript
// 1. Generate a stealth address for the recipient
const response = await fetch(
  '/api/zk/stealth/address?recipient_public_key=0x123...',
  { method: 'POST' }
);

const { stealth_address, view_key } = await response.json();

// 2. Send payment to the stealth address
await sendTransaction({
  to: stealth_address,
  amount: 1000
});

// 3. The recipient can view funds using the view key
const balance = await viewStealthAddressBalance(view_key);
```

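Server-side, stealth address generation typically follows an ECDH pattern: an ephemeral key is combined with the recipient's public key, and the shared secret is hashed into a one-time address the recipient can re-derive. A hedged sketch with X25519 from the `cryptography` package; the field names mirror the response shown earlier, but the actual server derivation is not specified in this document:

```python
import hashlib
from cryptography.hazmat.primitives.asymmetric.x25519 import (
    X25519PrivateKey, X25519PublicKey,
)
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

def _address_from(shared: bytes) -> str:
    # 20-byte address derived from the hashed shared secret (illustrative).
    secret_hash = hashlib.sha256(shared).digest()
    return "0x" + hashlib.sha256(b"addr|" + secret_hash).hexdigest()[:40]

def make_stealth_address(recipient_public: X25519PublicKey) -> dict:
    """Sender side: ephemeral ECDH against the recipient's public key."""
    ephemeral = X25519PrivateKey.generate()
    shared = ephemeral.exchange(recipient_public)
    return {
        "stealth_address": _address_from(shared),
        "shared_secret_hash": hashlib.sha256(shared).hexdigest(),
        "ephemeral_public": ephemeral.public_key()
            .public_bytes(Encoding.Raw, PublicFormat.Raw).hex(),
    }

def recover_stealth_address(recipient_private: X25519PrivateKey,
                            ephemeral_public_hex: str) -> str:
    """Recipient side: re-derive the same address from the ephemeral key."""
    ephemeral_public = X25519PublicKey.from_public_bytes(
        bytes.fromhex(ephemeral_public_hex))
    return _address_from(recipient_private.exchange(ephemeral_public))
```

The symmetry of ECDH (both sides compute the same shared secret) is what lets only the recipient, holding the private view key, link the one-time address to themselves.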
## Support

For questions about ZK applications:

- Check the API documentation at `/docs/`
- Review the status endpoint at `/api/zk/status`
- Examine the circuit source code in `apps/zk-circuits/`

@@ -1,185 +0,0 @@

# Confidential Transactions Implementation Summary

## Overview

Successfully implemented a comprehensive confidential transaction system for AITBC with opt-in encryption, selective disclosure, and full audit compliance. The implementation provides privacy for sensitive transaction data while maintaining regulatory compliance.

## Completed Components

### 1. Encryption Service ✅

- **Hybrid Encryption**: AES-256-GCM for data encryption, X25519 for key exchange
- **Envelope Pattern**: Random DEK per transaction, encrypted for each participant
- **Audit Escrow**: Separate encryption key for regulatory access
- **Performance**: Efficient batch operations, key caching

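The envelope pattern above can be sketched with the `cryptography` package: a fresh data-encryption key (DEK) seals the payload with AES-256-GCM, and the DEK itself is sealed once per participant via an ephemeral X25519 exchange. This is an illustrative sketch, not the project's actual `EncryptionService`; the HKDF step and field layout are assumptions:

```python
import json
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import (
    X25519PrivateKey, X25519PublicKey,
)
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

def _derive_kek(shared: bytes) -> bytes:
    # Key-encryption key derived from the X25519 shared secret.
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"dek-wrap").derive(shared)

def _wrap_dek(dek: bytes, recipient_public: X25519PublicKey) -> dict:
    """Seal the DEK for one recipient via an ephemeral X25519 exchange."""
    ephemeral = X25519PrivateKey.generate()
    kek = _derive_kek(ephemeral.exchange(recipient_public))
    nonce = os.urandom(12)
    return {
        "ephemeral": ephemeral.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw),
        "nonce": nonce,
        "wrapped": AESGCM(kek).encrypt(nonce, dek, None),
    }

def encrypt_for(data: dict, recipients: dict) -> dict:
    """Envelope-encrypt `data` for every participant in `recipients`."""
    dek = AESGCM.generate_key(bit_length=256)  # fresh DEK per transaction
    nonce = os.urandom(12)
    return {
        "nonce": nonce,
        "ciphertext": AESGCM(dek).encrypt(nonce, json.dumps(data).encode(), None),
        "encrypted_keys": {pid: _wrap_dek(dek, pub)
                           for pid, pub in recipients.items()},
    }

def decrypt_as(envelope: dict, participant_id: str,
               private_key: X25519PrivateKey) -> dict:
    """Unwrap this participant's DEK copy, then open the payload."""
    entry = envelope["encrypted_keys"][participant_id]
    kek = _derive_kek(private_key.exchange(
        X25519PublicKey.from_public_bytes(entry["ephemeral"])))
    dek = AESGCM(kek).decrypt(entry["nonce"], entry["wrapped"], None)
    return json.loads(AESGCM(dek).decrypt(envelope["nonce"],
                                          envelope["ciphertext"], None))
```

Adding an "audit" entry to `recipients` gives the escrow behavior described above: the auditor holds one more wrapped copy of the same DEK.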
### 2. Key Management ✅

- **Per-Participant Keys**: X25519 key pairs for each participant
- **Key Rotation**: Automated rotation with re-encryption of active data
- **Secure Storage**: File-based storage (development), HSM-ready interface
- **Access Control**: Role-based permissions for key operations

### 3. Access Control ✅

- **Role-Based Policies**: Client, Miner, Coordinator, Auditor, Regulator roles
- **Time Restrictions**: Business hours, retention periods
- **Purpose-Based Access**: Settlement, Audit, Compliance, Dispute, Support
- **Dynamic Policies**: Custom policy creation and management

### 4. Audit Logging ✅

- **Tamper-Evident**: Chain of hashes for integrity verification
- **Comprehensive**: All access, key operations, policy changes
- **Export Capabilities**: JSON, CSV formats for regulators
- **Retention**: Configurable retention periods by role

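The tamper-evident property is commonly achieved with a hash chain: each entry's hash covers the previous entry's hash, so editing any entry invalidates every later link. A minimal stdlib sketch (field names are illustrative, not the actual log schema):

```python
import hashlib
import json

def append_entry(log: list, event: dict) -> None:
    """Append an event, chaining it to the previous entry's hash."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = {"event": event, "prev_hash": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append({**body, "hash": digest})

def verify_chain(log: list) -> bool:
    """Recompute every link; returns False if any entry was altered."""
    prev_hash = "0" * 64
    for entry in log:
        body = {"event": entry["event"], "prev_hash": entry["prev_hash"]}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev_hash"] != prev_hash or entry["hash"] != digest:
            return False
        prev_hash = entry["hash"]
    return True
```

A regulator export only needs the entries plus the final hash; re-running `verify_chain` over the export proves nothing was rewritten after the fact.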
### 5. API Endpoints ✅

- **/confidential/transactions**: Create and manage confidential transactions
- **/confidential/access**: Request access to encrypted data
- **/confidential/audit**: Regulatory access with authorization
- **/confidential/keys**: Key registration and rotation
- **Rate Limiting**: Protection against abuse

### 6. Data Models ✅

- **ConfidentialTransaction**: Opt-in privacy flags
- **Access Control Models**: Requests, responses, logs
- **Key Management Models**: Registration, rotation, audit

## Security Features

### Encryption

- AES-256-GCM provides confidentiality + integrity
- X25519 ECDH for secure key exchange
- Per-transaction DEKs for forward secrecy
- Random IVs per encryption

### Access Control

- Multi-factor authentication ready
- Time-bound access permissions
- Business hour restrictions for auditors
- Retention period enforcement

### Audit Compliance

- GDPR right to encryption
- SEC Rule 17a-4 compliance
- Immutable audit trails
- Regulatory access with court orders

## Current Limitations

### 1. Database Persistence ❌

- Current implementation uses mock storage
- Needs SQLModel/SQLAlchemy integration
- Transaction storage and querying
- Encrypted data BLOB handling

### 2. Private Key Security ❌

- File storage writes keys unencrypted
- Needs HSM or KMS integration
- Key escrow for recovery
- Hardware security module support

### 3. Async Issues ❌

- AuditLogger uses threading in an async context
- Needs asyncio task conversion
- Background writer refactoring
- Proper async/await patterns

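The asyncio conversion usually means replacing the thread-based background writer with an `asyncio.Queue` drained by a single task. A sketch of the target shape (class and method names are illustrative, not the actual `AuditLogger` API):

```python
import asyncio
from typing import Optional

class AsyncAuditWriter:
    """Background audit writer driven by an asyncio task instead of a thread."""

    def __init__(self) -> None:
        self.queue: asyncio.Queue = asyncio.Queue()
        self.written: list = []  # stand-in for the real log sink
        self._task: Optional[asyncio.Task] = None

    async def start(self) -> None:
        self._task = asyncio.create_task(self._drain())

    async def log(self, entry: dict) -> None:
        # Request handlers enqueue and return immediately.
        await self.queue.put(entry)

    async def _drain(self) -> None:
        while True:
            entry = await self.queue.get()
            self.written.append(entry)  # real impl: await an async file/db write
            self.queue.task_done()

    async def close(self) -> None:
        await self.queue.join()  # flush everything still queued
        self._task.cancel()
        try:
            await self._task
        except asyncio.CancelledError:
            pass
```

Everything runs on the event loop, so no locks are needed and a clean shutdown is a `join` followed by a cancel rather than a thread join.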
### 4. Rate Limiting ⚠️

- slowapi not properly integrated
- Needs FastAPI app state setup
- Distributed rate limiting for production
- Redis backend for scalability

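Until slowapi is wired into the FastAPI app state, a process-local token bucket gives equivalent protection within a single worker (a Redis-backed variant is what the distributed case above calls for). An illustrative stdlib sketch:

```python
import time

class TokenBucket:
    """Allow `rate` requests per second with bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: int) -> None:
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.updated = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

Moving the token count and timestamp into Redis (e.g. via a small Lua script for atomicity) turns the same logic into the distributed limiter the checklist asks for.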
## Production Readiness Checklist

### Critical (Must Fix)

- [ ] Database persistence layer
- [ ] HSM/KMS integration for private keys
- [ ] Fix async issues in audit logging
- [ ] Proper rate limiting setup

### Important (Should Fix)

- [ ] Performance optimization for high volume
- [ ] Distributed key management
- [ ] Backup and recovery procedures
- [ ] Monitoring and alerting

### Nice to Have (Future)

- [ ] Multi-party computation
- [ ] Zero-knowledge proofs integration
- [ ] Advanced privacy features
- [ ] Cross-chain confidential settlements

## Testing Coverage

### Unit Tests ✅

- Encryption/decryption correctness
- Key management operations
- Access control logic
- Audit logging functionality

### Integration Tests ✅

- End-to-end transaction flow
- Cross-service integration
- API endpoint testing
- Error handling scenarios

### Performance Tests ⚠️

- Basic benchmarks included
- Needs load testing
- Scalability assessment
- Resource usage profiling

## Migration Strategy

### Phase 1: Infrastructure (Week 1-2)

1. Implement database persistence
2. Integrate HSM for key storage
3. Fix async issues
4. Set up proper rate limiting

### Phase 2: Security Hardening (Week 3-4)

1. Security audit and penetration testing
2. Implement additional monitoring
3. Create backup procedures
4. Document security controls

### Phase 3: Production Rollout (Month 2)

1. Gradual rollout with feature flags
2. Performance monitoring
3. User training and documentation
4. Compliance validation

## Compliance Status

### GDPR ✅

- Right to encryption implemented
- Data minimization by design
- Privacy by default

### Financial Regulations ✅

- SEC Rule 17a-4 audit logs
- MiFID II transaction reporting
- AML/KYC integration points

### Industry Standards ✅

- ISO 27001 alignment
- NIST Cybersecurity Framework
- PCI DSS considerations

## Next Steps

1. **Immediate**: Fix database persistence and HSM integration
2. **Short-term**: Complete security hardening and testing
3. **Long-term**: Production deployment and monitoring

## Documentation

- [Architecture Design](confidential-transactions.md)
- [API Documentation](../docs/api/coordinator/endpoints.md)
- [Security Guide](security-guidelines.md)
- [Compliance Matrix](compliance-matrix.md)

## Conclusion

The confidential transaction system provides a solid foundation for privacy-preserving transactions in AITBC. While the core functionality is complete and tested, several production readiness items need to be addressed before deployment.

The modular design allows for incremental improvements and ensures the system can evolve with changing requirements and regulations.

@@ -1,354 +0,0 @@

# Confidential Transactions Architecture

## Overview

Design for opt-in confidential transaction support in AITBC, enabling participants to encrypt sensitive transaction data while maintaining selective disclosure and audit capabilities.

## Architecture

### Encryption Model

**Hybrid Encryption with Envelope Pattern**:

1. **Data Encryption**: AES-256-GCM for transaction data
2. **Key Exchange**: X25519 ECDH for per-recipient key distribution
3. **Envelope Pattern**: Random DEK per transaction, encrypted for each authorized party

### Key Components

```
┌─────────────────┐    ┌──────────────────┐    ┌─────────────────┐
│  Transaction    │───▶│  Encryption      │───▶│  Storage        │
│  Service        │    │  Service         │    │  Layer          │
└─────────────────┘    └──────────────────┘    └─────────────────┘
         │                      │                       │
         ▼                      ▼                       ▼
┌─────────────────┐    ┌──────────────────┐    ┌─────────────────┐
│  Key Manager    │    │  Access Control  │    │  Audit Log      │
└─────────────────┘    └──────────────────┘    └─────────────────┘
```

## Data Flow

### 1. Transaction Creation (Opt-in)

```python
# Client requests a confidential transaction
transaction = {
    "job_id": "job-123",
    "amount": "1000",
    "confidential": True,
    "participants": ["client-456", "miner-789", "auditor-001"]
}

# Coordinator encrypts sensitive fields
encrypted = encryption_service.encrypt(
    data={"amount": "1000", "pricing": "details"},
    participants=transaction["participants"]
)

# Store with encrypted payload
stored_transaction = {
    "job_id": "job-123",
    "public_data": {"job_id": "job-123"},
    "encrypted_data": encrypted.ciphertext,
    "encrypted_keys": encrypted.encrypted_keys,
    "confidential": True
}
```

### 2. Data Access (Authorized Party)

```python
# Miner requests access to transaction data
access_request = {
    "transaction_id": "tx-456",
    "requester": "miner-789",
    "purpose": "settlement"
}

# Verify access rights
if access_control.verify(access_request):
    # Decrypt using the recipient's private key
    decrypted = encryption_service.decrypt(
        ciphertext=stored_transaction["encrypted_data"],
        encrypted_key=stored_transaction["encrypted_keys"]["miner-789"],
        private_key=miner_private_key
    )
```

### 3. Audit Access (Regulatory)

```python
# Auditor with a court order requests access
audit_request = {
    "transaction_id": "tx-456",
    "requester": "auditor-001",
    "authorization": "court-order-123"
}

# Special audit key escrow
audit_key = key_manager.get_audit_key(audit_request["authorization"])
decrypted = encryption_service.audit_decrypt(
    ciphertext=stored_transaction["encrypted_data"],
    audit_key=audit_key
)
```

## Implementation Details

### Encryption Service

```python
import json
import os
from typing import Dict, List

class ConfidentialTransactionService:
    """Service for handling confidential transactions"""

    def __init__(self, key_manager: KeyManager):
        self.key_manager = key_manager
        self.cipher = AES256GCM()  # project wrapper around AES-256-GCM

    def encrypt(self, data: Dict, participants: List[str]) -> EncryptedData:
        """Encrypt data for multiple participants"""
        # Generate a random 256-bit DEK
        dek = os.urandom(32)

        # Encrypt the data with the DEK
        ciphertext = self.cipher.encrypt(dek, json.dumps(data))

        # Encrypt the DEK for each participant
        encrypted_keys = {}
        for participant in participants:
            public_key = self.key_manager.get_public_key(participant)
            encrypted_keys[participant] = self._encrypt_dek(dek, public_key)

        # Add the audit escrow copy
        audit_public_key = self.key_manager.get_audit_key()
        encrypted_keys["audit"] = self._encrypt_dek(dek, audit_public_key)

        return EncryptedData(
            ciphertext=ciphertext,
            encrypted_keys=encrypted_keys,
            algorithm="AES-256-GCM+X25519"
        )

    def decrypt(self, ciphertext: bytes, encrypted_key: bytes,
                private_key: bytes) -> Dict:
        """Decrypt data for a specific participant"""
        # Unwrap the DEK with the participant's private key
        dek = self._decrypt_dek(encrypted_key, private_key)

        # Decrypt the payload
        plaintext = self.cipher.decrypt(dek, ciphertext)
        return json.loads(plaintext)
```

### Key Management

```python
class KeyManager:
    """Manages encryption keys for participants"""

    def __init__(self, storage: KeyStorage):
        self.storage = storage
        self.key_pairs = {}

    def generate_key_pair(self, participant_id: str) -> KeyPair:
        """Generate an X25519 key pair for a participant"""
        # X25519 here is a project wrapper; with the `cryptography`
        # package this corresponds to X25519PrivateKey.generate()
        private_key = X25519.generate_private_key()
        public_key = private_key.public_key()

        key_pair = KeyPair(
            participant_id=participant_id,
            private_key=private_key,
            public_key=public_key
        )

        self.storage.store(key_pair)
        return key_pair

    def rotate_keys(self, participant_id: str):
        """Rotate encryption keys"""
        # Generate a new key pair
        new_key_pair = self.generate_key_pair(participant_id)

        # Re-encrypt active transactions under the new key
        self._reencrypt_transactions(participant_id, new_key_pair)
```

### Access Control

```python
class AccessController:
    """Controls access to confidential transaction data"""

    def __init__(self, policy_store: PolicyStore):
        self.policy_store = policy_store

    def verify_access(self, request: AccessRequest) -> bool:
        """Verify if the requester has access rights"""
        # Check participant status
        if not self._is_authorized_participant(request.requester):
            return False

        # Check purpose-based access
        if not self._check_purpose(request.purpose, request.requester):
            return False

        # Check time-based restrictions
        if not self._check_time_restrictions(request):
            return False

        return True

    def _is_authorized_participant(self, participant_id: str) -> bool:
        """Check if participant is authorized for confidential transactions"""
        # Verify KYC/KYB status
        # Check compliance flags
        # Validate regulatory approval
        return True
```

## Data Models

### Confidential Transaction

```python
class ConfidentialTransaction(BaseModel):
    """Transaction with optional confidential fields"""

    # Public fields (always visible)
    transaction_id: str
    job_id: str
    timestamp: datetime
    status: str

    # Confidential fields (encrypted when opt-in)
    amount: Optional[str] = None
    pricing: Optional[Dict] = None
    settlement_details: Optional[Dict] = None

    # Encryption metadata
    confidential: bool = False
    encrypted_data: Optional[bytes] = None
    encrypted_keys: Optional[Dict[str, bytes]] = None
    algorithm: Optional[str] = None

    # Access control
    participants: List[str] = []
    access_policies: Dict[str, Any] = {}
```

### Access Log

```python
class ConfidentialAccessLog(BaseModel):
    """Audit log for confidential data access"""

    transaction_id: str
    requester: str
    purpose: str
    timestamp: datetime
    authorized_by: str
    data_accessed: List[str]
    ip_address: str
    user_agent: str
```

## Security Considerations

### 1. Key Security

- Private keys stored in HSM or secure enclave
- Key rotation every 90 days
- Zero-knowledge proof of key possession

### 2. Data Protection

- AES-256-GCM provides confidentiality + integrity
- Random IV per encryption
- Forward secrecy with per-transaction DEKs

### 3. Access Control

- Multi-factor authentication for decryption
- Role-based access control
- Time-bound access permissions

### 4. Audit Compliance

- Immutable audit logs
- Regulatory access with court orders
- Privacy-preserving audit proofs

## Performance Optimization

### 1. Lazy Encryption

- Only encrypt fields marked as confidential
- Cache encrypted data for frequent access
- Batch encryption for bulk operations

### 2. Key Management

- Pre-compute shared secrets for regular participants
- Use key derivation for multiple access levels
- Implement key caching with secure eviction

### 3. Storage Optimization

- Compress encrypted data
- Deduplicate common encrypted patterns
- Use column-level encryption for databases

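Key caching with secure eviction, as suggested above, can be sketched as an LRU dict that overwrites the cached secret before dropping it. The overwrite is best-effort (Python cannot guarantee memory zeroing), and the names are illustrative:

```python
from collections import OrderedDict
from typing import Optional

class SecretCache:
    """LRU cache for derived shared secrets with best-effort wipe on eviction."""

    def __init__(self, max_entries: int = 128) -> None:
        self.max_entries = max_entries
        self._cache = OrderedDict()

    def put(self, participant_id: str, secret: bytes) -> None:
        self._cache[participant_id] = bytearray(secret)  # mutable copy we can wipe
        self._cache.move_to_end(participant_id)
        while len(self._cache) > self.max_entries:
            _, evicted = self._cache.popitem(last=False)  # least recently used
            for i in range(len(evicted)):                 # overwrite before release
                evicted[i] = 0

    def get(self, participant_id: str) -> Optional[bytes]:
        secret = self._cache.get(participant_id)
        if secret is None:
            return None
        self._cache.move_to_end(participant_id)  # refresh recency
        return bytes(secret)
```

For stronger guarantees the cache would hold HSM key handles rather than raw secret bytes, making eviction a handle release instead of a memory wipe.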
## Migration Strategy

### Phase 1: Opt-in Support

- Add confidential flags to existing models
- Deploy encryption service
- Update transaction endpoints

### Phase 2: Participant Onboarding

- Generate key pairs for all participants
- Implement key distribution
- Train users on privacy features

### Phase 3: Full Rollout

- Enable confidential transactions by default for sensitive data
- Implement advanced access controls
- Add privacy analytics and reporting

## Testing Strategy

### 1. Unit Tests

- Encryption/decryption correctness
- Key management operations
- Access control logic

### 2. Integration Tests

- End-to-end confidential transaction flow
- Cross-system key exchange
- Audit trail verification

### 3. Security Tests

- Penetration testing
- Cryptographic validation
- Side-channel resistance

## Compliance

### 1. GDPR

- Right to encryption
- Data minimization
- Privacy by design

### 2. Financial Regulations

- SEC Rule 17a-4
- MiFID II transaction reporting
- AML/KYC requirements

### 3. Industry Standards

- ISO 27001
- NIST Cybersecurity Framework
- PCI DSS for payment data

## Next Steps

1. Implement core encryption service
2. Create key management infrastructure
3. Update transaction models and APIs
4. Deploy access control system
5. Implement audit logging
6. Conduct security testing
7. Gradual rollout with monitoring

@@ -1,192 +0,0 @@

# AITBC Documentation Gaps Report

This document identifies missing documentation for completed features based on the `done.md` file and current documentation state.

## Critical Missing Documentation

### 1. Zero-Knowledge Proof Receipt Attestation

**Status**: ✅ Completed (Implementation in Stage 7)
**Missing Documentation**:

- [ ] User guide: How to use ZK proofs for receipt attestation
- [ ] Developer guide: Integrating ZK proofs into applications
- [ ] Operator guide: Setting up ZK proof generation service
- [ ] API reference: ZK proof endpoints and parameters
- [ ] Tutorial: End-to-end ZK proof workflow

**Priority**: High - Complex feature requiring user education

### 2. Confidential Transactions

**Status**: ✅ Completed (Implementation in Stage 7)
**Existing**: Technical implementation docs
**Missing Documentation**:

- [ ] User guide: How to create confidential transactions
- [ ] Developer guide: Building privacy-preserving applications
- [ ] Migration guide: Moving from regular to confidential transactions
- [ ] Security considerations: Best practices for confidential transactions

**Priority**: High - Security-sensitive feature

### 3. HSM Key Management

**Status**: ✅ Completed (Implementation in Stage 7)
**Missing Documentation**:

- [ ] Operator guide: HSM setup and configuration
- [ ] Integration guide: Azure Key Vault integration
- [ ] Integration guide: AWS KMS integration
- [ ] Security guide: HSM best practices
- [ ] Troubleshooting: Common HSM issues

**Priority**: High - Enterprise feature

### 4. Multi-tenant Coordinator Infrastructure

**Status**: ✅ Completed (Implementation in Stage 7)
**Missing Documentation**:

- [ ] Architecture guide: Multi-tenant architecture overview
- [ ] Operator guide: Setting up multi-tenant infrastructure
- [ ] Tenant management: Creating and managing tenants
- [ ] Billing guide: Understanding billing and quotas
- [ ] Migration guide: Moving to multi-tenant setup

**Priority**: High - Major architectural change

### 5. Enterprise Connectors (Python SDK)

**Status**: ✅ Completed (Implementation in Stage 7)
**Existing**: Technical implementation
**Missing Documentation**:

- [ ] Quick start: Getting started with enterprise connectors
- [ ] Connector guide: Stripe connector usage
- [ ] Connector guide: ERP connector usage
- [ ] Development guide: Building custom connectors
- [ ] Reference: Complete API documentation

**Priority**: Medium - Developer-facing feature

### 6. Ecosystem Certification Program

**Status**: ✅ Completed (Implementation in Stage 7)
**Existing**: Program documentation
**Missing Documentation**:

- [ ] Participant guide: How to get certified
- [ ] Self-service portal: Using the certification portal
- [ ] Badge guide: Displaying certification badges
- [ ] Maintenance guide: Maintaining certification status

**Priority**: Medium - Program adoption

## Moderate Priority Gaps

### 7. Cross-Chain Settlement

**Status**: ✅ Completed (Implementation in Stage 6)
**Existing**: Design documentation
**Missing Documentation**:

- [ ] Integration guide: Setting up cross-chain bridges
- [ ] Tutorial: Cross-chain transaction walkthrough
- [ ] Reference: Bridge API documentation

### 8. GPU Service Registry (30+ Services)

**Status**: ✅ Completed (Implementation in Stage 7)
**Missing Documentation**:

- [ ] Provider guide: Registering GPU services
- [ ] Service catalog: Available service types
- [ ] Pricing guide: Setting service prices
- [ ] Integration guide: Using GPU services

### 9. Advanced Cryptography Features

**Status**: ✅ Completed (Implementation in Stage 7)
**Missing Documentation**:

- [ ] Hybrid encryption guide: Using AES-256-GCM + X25519
- [ ] Role-based access control: Setting up RBAC
- [ ] Audit logging: Configuring tamper-evident logging

## Low Priority Gaps

### 10. Community & Governance

**Status**: ✅ Completed (Implementation in Stage 7)
**Existing**: Framework documentation
**Missing Documentation**:

- [ ] Governance website: User guide for governance site
- [ ] RFC templates: Detailed RFC writing guide
- [ ] Community metrics: Understanding KPIs

### 11. Ecosystem Growth Initiatives

**Status**: ✅ Completed (Implementation in Stage 7)
**Existing**: Program documentation
**Missing Documentation**:

- [ ] Hackathon platform: Using the submission platform
- [ ] Grant tracking: Monitoring grant progress
- [ ] Extension marketplace: Publishing extensions

## Documentation Structure Improvements

### Missing Sections

1. **Migration Guides** - No migration documentation for major changes
2. **Troubleshooting** - Limited troubleshooting guides
3. **Best Practices** - Few best practice documents
4. **Performance Guides** - No performance optimization guides
5. **Security Guides** - Limited security documentation beyond threat modeling

### Outdated Documentation

1. **API References** - May not reflect latest endpoints
2. **Installation Guides** - May not include all components
3. **Configuration** - Missing new configuration options

## Recommended Actions

### Immediate (Next Sprint)

1. Create ZK proof user guide and developer tutorial
2. Document HSM integration for Azure Key Vault and AWS KMS
3. Write multi-tenant setup guide for operators
4. Create confidential transaction quick start

### Short Term (Next Month)

1. Complete enterprise connector documentation
2. Add cross-chain settlement integration guides
3. Document GPU service provider workflow
4. Create migration guides for major features

### Medium Term (Next Quarter)

1. Expand troubleshooting section
2. Add performance optimization guides
3. Create security best practices documentation
4. Build interactive tutorials for complex features

### Long Term (Next 6 Months)

1. Create video tutorials for key workflows
2. Build interactive API documentation
3. Add regional deployment guides
4. Create compliance documentation for regulated markets

## Documentation Metrics

### Current State

- Total markdown files: 65+
- Organized into: 5 main categories
- Missing critical docs: 11 major features
- Coverage estimate: 60% of completed features documented

### Target State

- Critical features: 100% documented
- User guides: All major features
- Developer resources: Complete API coverage
- Operator guides: All deployment scenarios

## Resources Needed

### Writers

- Technical writer: 1 FTE for 3 months
- Developer advocates: 2 FTE for tutorials
- Security specialist: For security documentation

### Tools

- Documentation platform: GitBook or Docusaurus
- API documentation: Swagger/OpenAPI tools
- Interactive tutorials: CodeSandbox or similar

### Process

- Documentation review workflow
- Translation process for internationalization
- Community contribution process for docs

---

**Last Updated**: 2024-01-15
**Next Review**: 2024-02-15
**Owner**: Documentation Team

@@ -1,230 +0,0 @@

# AITBC Enterprise Integration SLA

## Overview

This document outlines the Service Level Agreement (SLA) for enterprise integrations with the AITBC network, including uptime guarantees, performance expectations, and support commitments.

## Document Version

- Version: 1.0
- Date: December 2024
- Effective Date: January 1, 2025

## Service Availability

### Coordinator API

- **Uptime Guarantee**: 99.9% monthly (excluding scheduled maintenance)
- **Scheduled Maintenance**: Maximum 4 hours per month, announced 72 hours in advance
- **Emergency Maintenance**: Maximum 2 hours per month, announced 2 hours in advance

### Mining Pool Network
|
||||
- **Network Uptime**: 99.5% monthly
|
||||
- **Minimum Active Miners**: 1000 miners globally distributed
|
||||
- **Geographic Distribution**: Minimum 3 continents, 5 countries
|
||||
|
||||
### Settlement Layer
|
||||
- **Confirmation Time**: 95% of transactions confirmed within 30 seconds
|
||||
- **Cross-Chain Bridge**: 99% availability for supported chains
|
||||
- **Finality**: 99.9% of transactions final after 2 confirmations
|
||||
|
||||
## Performance Metrics
|
||||
|
||||
### API Response Times
|
||||
| Endpoint | 50th Percentile | 95th Percentile | 99th Percentile |
|
||||
|----------|-----------------|-----------------|-----------------|
|
||||
| Job Submission | 50ms | 100ms | 200ms |
|
||||
| Job Status | 25ms | 50ms | 100ms |
|
||||
| Receipt Verification | 100ms | 200ms | 500ms |
|
||||
| Settlement Initiation | 150ms | 300ms | 1000ms |
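The percentile targets above can be checked against measured latencies with the standard library. This is an illustrative sketch (the function names and sample data are not part of the SLA); it uses `statistics.quantiles` to extract the 50th/95th/99th percentile cut points:

```python
import statistics

def latency_percentiles(samples_ms):
    """Return (p50, p95, p99) for a list of latency samples in milliseconds."""
    qs = statistics.quantiles(sorted(samples_ms), n=100, method="inclusive")
    # quantiles(n=100) returns the 1st..99th percentile cut points
    return qs[49], qs[94], qs[98]

def meets_sla(samples_ms, p50_target, p95_target, p99_target):
    """True if every measured percentile is within its SLA target."""
    p50, p95, p99 = latency_percentiles(samples_ms)
    return p50 <= p50_target and p95 <= p95_target and p99 <= p99_target

# Synthetic job-submission latencies checked against the 50/100/200 ms targets
samples = [40] * 90 + [90] * 8 + [150] * 2
ok = meets_sla(samples, 50, 100, 200)  # True for this sample set
```

In practice you would feed this a trailing 30-day window of per-request latencies, matching the measurement basis defined under Calculations below.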
### Throughput Limits

| Service | Sustained Rate | Burst Capacity |
|---------|----------------|----------------|
| Job Submission | 1000/minute | 100 requests |
| API Calls | 10,000/minute | 1000 requests |
| Webhook Events | 5000/minute | 500 requests |
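Clients are expected to pace requests within these limits (see Customer Responsibilities below). A minimal token-bucket sketch, assuming the burst figure is the bucket capacity and the sustained rate is the refill rate (class and parameter names are illustrative, not part of the AITBC client SDK):

```python
import time

class TokenBucket:
    """Client-side pacing against a sustained rate with a burst capacity."""

    def __init__(self, rate_per_minute: float, burst: int):
        self.rate = rate_per_minute / 60.0   # tokens refilled per second
        self.capacity = burst                # maximum burst size
        self.tokens = float(burst)
        self.last = time.monotonic()

    def try_acquire(self) -> bool:
        """Consume one token if available; otherwise the caller should wait."""
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

# Pace job submissions: 1000/minute sustained, bursts of up to 100
bucket = TokenBucket(rate_per_minute=1000, burst=100)
```

Requests denied by `try_acquire()` should be delayed rather than sent, so the client never trips the server-side limiter.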
### Data Processing

- **Proof Generation**: Average 2 seconds, 95% under 5 seconds
- **ZK Verification**: Average 100ms, 95% under 200ms
- **Encryption/Decryption**: Average 50ms, 95% under 100ms

## Support Services

### Support Tiers

| Tier | Response Time | Availability | Escalation |
|------|---------------|--------------|------------|
| Enterprise | 1 hour (P1), 4 hours (P2), 24 hours (P3) | 24x7x365 | Direct to engineering |
| Business | 4 hours (P1), 24 hours (P2), 48 hours (P3) | Business hours | Technical lead |
| Developer | 24 hours (P1), 72 hours (P2), 5 days (P3) | Business hours | Support team |

### Incident Management

- **P1 - Critical**: System down, data loss, security breach
- **P2 - High**: Significant feature degradation, performance impact
- **P3 - Medium**: Feature not working, documentation issues
- **P4 - Low**: General questions, enhancement requests

### Maintenance Windows

- **Regular Maintenance**: Every Sunday 02:00-04:00 UTC
- **Security Updates**: As needed, minimum 24 hours notice
- **Major Upgrades**: Quarterly, minimum 30 days notice

## Data Management

### Data Retention

| Data Type | Retention Period | Archival |
|-----------|------------------|----------|
| Transaction Records | 7 years | Yes |
| Audit Logs | 7 years | Yes |
| Performance Metrics | 2 years | Yes |
| Error Logs | 90 days | No |
| Debug Logs | 30 days | No |

### Data Availability

- **Backup Frequency**: Every 15 minutes
- **Recovery Point Objective (RPO)**: 15 minutes
- **Recovery Time Objective (RTO)**: 4 hours
- **Geographic Redundancy**: 3 regions, cross-replicated

### Privacy and Compliance

- **GDPR Compliant**: Yes
- **Data Processing Agreement**: Available
- **Privacy Impact Assessment**: Completed
- **Certifications**: ISO 27001, SOC 2 Type II

## Integration SLAs

### ERP Connectors

| Metric | Target |
|--------|--------|
| Sync Latency | < 5 minutes |
| Data Accuracy | 99.99% |
| Error Rate | < 0.1% |
| Retry Success Rate | > 99% |

### Payment Processors

| Metric | Target |
|--------|--------|
| Settlement Time | < 2 minutes |
| Success Rate | 99.9% |
| Fraud Detection | < 0.01% false positive rate |
| Chargeback Handling | 24 hours |

### Webhook Delivery

- **Delivery Guarantee**: 99.5% successful delivery
- **Retry Policy**: Exponential backoff, max 10 attempts
- **Timeout**: 30 seconds per attempt
- **Verification**: HMAC-SHA256 signatures
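Receivers should verify the HMAC-SHA256 signature on every delivery, and senders retry with exponential backoff. A minimal sketch using only the standard library (the header name, secret value, and helper names are illustrative assumptions, not the documented wire format):

```python
import hashlib
import hmac

def verify_webhook_signature(payload: bytes, signature_hex: str, secret: bytes) -> bool:
    """Recompute the HMAC-SHA256 of the payload and compare in constant time."""
    expected = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_hex)

def backoff_schedule(max_attempts: int = 10, base: float = 1.0, cap: float = 300.0):
    """Exponential backoff delays (seconds) for up to max_attempts redeliveries."""
    return [min(cap, base * 2 ** i) for i in range(max_attempts)]
```

Using `hmac.compare_digest` (rather than `==`) avoids timing side channels when comparing signatures; the 10-attempt schedule mirrors the retry policy above.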
## Security Commitments

### Availability

- **DDoS Protection**: 99.9% mitigation success
- **Incident Response**: < 1 hour detection, < 4 hours containment
- **Vulnerability Patching**: Critical patches within 24 hours

### Encryption Standards

- **In Transit**: TLS 1.3 minimum
- **At Rest**: AES-256 encryption
- **Key Management**: HSM-backed, regular rotation
- **Compliance**: FIPS 140-2 Level 3

## Penalties and Credits

### Service Credits

| Monthly Uptime | Credit Percentage |
|----------------|-------------------|
| < 99.9% | 10% |
| < 99.5% | 25% |
| < 99.0% | 50% |
| < 98.0% | 100% |

### Performance Credits

| Metric Miss | Credit |
|-------------|--------|
| Response time exceeds 95th percentile target | 5% |
| Throughput below committed limit | 10% |
| Data loss > RPO | 100% |

### Claim Process

1. Submit a ticket within 30 days of the incident
2. Provide evidence of the SLA breach
3. Review within 5 business days
4. Credit applied to the next invoice

## Exclusions

### Force Majeure

- Natural disasters
- War, terrorism, civil unrest
- Government actions
- Internet outages beyond our control

### Customer Responsibilities

- Proper API implementation
- Adequate error handling
- Rate limit compliance
- Security best practices

### Third-Party Dependencies

- External payment processors
- Cloud provider outages
- Blockchain network congestion
- DNS issues

## Monitoring and Reporting

### Available Metrics

- Real-time dashboard
- Historical reports (24 months)
- API usage analytics
- Performance benchmarks

### Custom Reports

- Monthly SLA reports
- Quarterly business reviews
- Annual security assessments
- Custom KPI tracking

### Alerting

- Email notifications
- SMS for critical issues
- Webhook callbacks
- Slack integration

## Contact Information

### Support

- **Enterprise Support**: enterprise@aitbc.io
- **Technical Support**: support@aitbc.io
- **Security Issues**: security@aitbc.io
- **Emergency Hotline**: +1-555-SECURITY

### Account Management

- **Enterprise Customers**: account@aitbc.io
- **Partners**: partners@aitbc.io
- **Billing**: billing@aitbc.io

## Definitions

### Terms

- **Uptime**: Percentage of time services are available and functional
- **Response Time**: Time from request receipt to first byte of response
- **Throughput**: Number of requests processed per unit of time
- **Error Rate**: Percentage of requests resulting in errors

### Calculations

- Monthly uptime is calculated as (total minutes - downtime minutes) / total minutes
- Percentiles are measured over a trailing 30-day period
- Credits are calculated against monthly service fees
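The uptime formula and the Service Credits tiers combine into a straightforward calculation. A sketch (function names are illustrative; the tier boundaries come from the Service Credits table):

```python
def monthly_uptime(total_minutes: int, downtime_minutes: float) -> float:
    """Monthly uptime per the Calculations section."""
    return (total_minutes - downtime_minutes) / total_minutes

def service_credit_pct(uptime: float) -> int:
    """Map measured uptime to the credit tiers in the Service Credits table."""
    if uptime < 0.98:
        return 100
    if uptime < 0.99:
        return 50
    if uptime < 0.995:
        return 25
    if uptime < 0.999:
        return 10
    return 0

# 90 minutes of downtime in a 30-day month (43,200 minutes)
u = monthly_uptime(43200, 90)   # about 0.99792, below 99.9%
credit = service_credit_pct(u)  # 10 (percent)
```

Checking tiers from strictest to loosest keeps each branch a simple threshold comparison.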
## Amendments

This SLA may be amended with:

- 30 days written notice for non-material changes
- 90 days written notice for material changes
- Mutual agreement for custom terms
- Immediate notice for security updates

---

*This SLA is part of the Enterprise Integration Agreement and is subject to the terms and conditions therein.*

@@ -1,333 +0,0 @@
# Governance Module

The AITBC governance module enables decentralized decision-making through proposal voting and parameter changes.

## Overview

The governance system allows AITBC token holders to:

- Create proposals for protocol changes
- Vote on active proposals
- Execute approved proposals
- Track governance history

## API Endpoints

### Get Governance Parameters

Retrieve current governance system parameters.

```http
GET /api/v1/governance/parameters
```

**Response:**
```json
{
  "min_proposal_voting_power": 1000,
  "max_proposal_title_length": 200,
  "max_proposal_description_length": 5000,
  "default_voting_period_days": 7,
  "max_voting_period_days": 30,
  "min_quorum_threshold": 0.01,
  "max_quorum_threshold": 1.0,
  "min_approval_threshold": 0.01,
  "max_approval_threshold": 1.0,
  "execution_delay_hours": 24
}
```

### List Proposals

Get a list of governance proposals.

```http
GET /api/v1/governance/proposals?status={status}&limit={limit}&offset={offset}
```

**Query Parameters:**

- `status` (optional): Filter by proposal status (`active`, `passed`, `rejected`, `executed`)
- `limit` (optional): Number of proposals to return (default: 20)
- `offset` (optional): Number of proposals to skip (default: 0)

**Response:**
```json
[
  {
    "id": "proposal-uuid",
    "title": "Proposal Title",
    "description": "Description of the proposal",
    "type": "parameter_change",
    "target": {},
    "proposer": "user-address",
    "status": "active",
    "created_at": "2025-12-28T18:00:00Z",
    "voting_deadline": "2026-01-04T18:00:00Z",
    "quorum_threshold": 0.1,
    "approval_threshold": 0.5,
    "current_quorum": 0.15,
    "current_approval": 0.75,
    "votes_for": 150,
    "votes_against": 50,
    "votes_abstain": 10,
    "total_voting_power": 1000000
  }
]
```

### Create Proposal

Create a new governance proposal.

```http
POST /api/v1/governance/proposals
```

**Request Body:**
```json
{
  "title": "Reduce Transaction Fees",
  "description": "This proposal suggests reducing transaction fees...",
  "type": "parameter_change",
  "target": {
    "fee_percentage": "0.05"
  },
  "voting_period": 7,
  "quorum_threshold": 0.1,
  "approval_threshold": 0.5
}
```

**Fields:**

- `title`: Proposal title (10-200 characters)
- `description`: Detailed description (50-5000 characters)
- `type`: Proposal type (`parameter_change`, `protocol_upgrade`, `fund_allocation`, `policy_change`)
- `target`: Target configuration for the proposal
- `voting_period`: Voting period in days (1-30)
- `quorum_threshold`: Minimum participation ratio (0.01-1.0)
- `approval_threshold`: Minimum approval ratio (0.01-1.0)
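Clients can validate these field constraints before submitting, so a malformed proposal fails fast instead of round-tripping to the API. A sketch (the function name and error strings are illustrative, not part of the API):

```python
def validate_proposal(p: dict) -> list[str]:
    """Check a proposal body against the documented field constraints."""
    errors = []
    if not 10 <= len(p.get("title", "")) <= 200:
        errors.append("title must be 10-200 characters")
    if not 50 <= len(p.get("description", "")) <= 5000:
        errors.append("description must be 50-5000 characters")
    if p.get("type") not in {"parameter_change", "protocol_upgrade",
                             "fund_allocation", "policy_change"}:
        errors.append("unknown proposal type")
    if not 1 <= p.get("voting_period", 0) <= 30:
        errors.append("voting_period must be 1-30 days")
    for field in ("quorum_threshold", "approval_threshold"):
        if not 0.01 <= p.get(field, 0.0) <= 1.0:
            errors.append(f"{field} must be between 0.01 and 1.0")
    return errors
```

An empty list means the body satisfies every documented constraint; otherwise each string names one violated rule.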
### Get Proposal

Get details of a specific proposal.

```http
GET /api/v1/governance/proposals/{proposal_id}
```

### Submit Vote

Submit a vote on a proposal.

```http
POST /api/v1/governance/vote
```

**Request Body:**
```json
{
  "proposal_id": "proposal-uuid",
  "vote": "for",
  "reason": "I support this change because..."
}
```

**Fields:**

- `proposal_id`: ID of the proposal to vote on
- `vote`: Vote option (`for`, `against`, `abstain`)
- `reason` (optional): Reason for the vote (max 500 characters)

### Get Voting Power

Check a user's voting power.

```http
GET /api/v1/governance/voting-power/{user_id}
```

**Response:**
```json
{
  "user_id": "user-address",
  "voting_power": 10000
}
```

### Execute Proposal

Execute an approved proposal.

```http
POST /api/v1/governance/execute/{proposal_id}
```

**Note:** Proposals can only be executed after:

1. The voting period has ended
2. The quorum threshold is met
3. The approval threshold is met
4. The 24-hour execution delay has passed

## Proposal Types

### Parameter Change

Modify system parameters like fees, limits, or thresholds.

**Example Target:**
```json
{
  "transaction_fee": "0.05",
  "min_stake_amount": "1000",
  "max_block_size": "2000"
}
```

### Protocol Upgrade

Initiate a protocol upgrade with version changes.

**Example Target:**
```json
{
  "version": "1.2.0",
  "upgrade_type": "hard_fork",
  "activation_block": 1000000,
  "changes": {
    "new_features": ["feature1", "feature2"],
    "breaking_changes": ["change1"]
  }
}
```

### Fund Allocation

Allocate funds from the treasury.

**Example Target:**
```json
{
  "amount": "1000000",
  "recipient": "0x123...",
  "purpose": "Ecosystem development fund",
  "milestones": [
    {
      "description": "Phase 1 development",
      "amount": "500000",
      "deadline": "2025-06-30"
    }
  ]
}
```

### Policy Change

Update governance or operational policies.

**Example Target:**
```json
{
  "policy_name": "voting_period",
  "new_value": "14 days",
  "rationale": "Longer voting period for better participation"
}
```

## Voting Process

1. **Proposal Creation**: Any user with sufficient voting power can create a proposal
2. **Voting Period**: Token holders vote during the specified voting period
3. **Quorum Check**: Minimum participation must be met
4. **Approval Check**: Minimum approval ratio must be met
5. **Execution Delay**: 24-hour delay before execution
6. **Execution**: Approved changes are implemented
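The checks in steps 3-5 can be sketched as a single eligibility function. This is illustrative only: the document does not define exactly how abstain votes count, so here they are assumed to count toward quorum but not toward approval:

```python
from datetime import datetime, timedelta, timezone

def can_execute(proposal: dict, now: datetime,
                execution_delay_hours: int = 24) -> bool:
    """Mirror the documented execution conditions for a proposal."""
    # Quorum: all cast votes (including abstentions) over total voting power
    total_cast = (proposal["votes_for"] + proposal["votes_against"]
                  + proposal["votes_abstain"])
    quorum = total_cast / proposal["total_voting_power"]
    # Approval: 'for' votes over the for/against votes only (assumption)
    decided = proposal["votes_for"] + proposal["votes_against"]
    approval = proposal["votes_for"] / max(1, decided)
    deadline = proposal["voting_deadline"]
    return (now >= deadline + timedelta(hours=execution_delay_hours)
            and quorum >= proposal["quorum_threshold"]
            and approval >= proposal["approval_threshold"])
```

With the example numbers from the List Proposals response (150 for, 50 against, 10 abstain), approval is 0.75 and the proposal becomes executable 24 hours after its deadline, provided quorum is met.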
## Database Schema

### GovernanceProposal

- `id`: Unique proposal identifier
- `title`: Proposal title
- `description`: Detailed description
- `type`: Proposal type
- `target`: Target configuration (JSON)
- `proposer`: Address of the proposer
- `status`: Current status
- `created_at`: Creation timestamp
- `voting_deadline`: End of voting period
- `quorum_threshold`: Minimum participation required
- `approval_threshold`: Minimum approval required
- `executed_at`: Execution timestamp
- `rejection_reason`: Reason for rejection, if any

### ProposalVote

- `id`: Unique vote identifier
- `proposal_id`: Reference to proposal
- `voter_id`: Address of the voter
- `vote`: Vote choice (for/against/abstain)
- `voting_power`: Voting power at the time of the vote
- `reason`: Vote reason
- `voted_at`: Vote timestamp

### TreasuryTransaction

- `id`: Unique transaction identifier
- `proposal_id`: Reference to proposal
- `from_address`: Source address
- `to_address`: Destination address
- `amount`: Transfer amount
- `status`: Transaction status
- `transaction_hash`: Blockchain hash

## Security Considerations

1. **Voting Power**: Based on AITBC token holdings
2. **Double Voting**: Prevented by tracking voter addresses
3. **Execution Delay**: Prevents rushed decisions
4. **Quorum Requirements**: Ensure sufficient participation
5. **Proposal Thresholds**: Prevent spam proposals

## Integration Guide

### Frontend Integration

```javascript
// Fetch proposals
const response = await fetch('/api/v1/governance/proposals');
const proposals = await response.json();

// Submit vote
await fetch('/api/v1/governance/vote', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    proposal_id: 'uuid',
    vote: 'for',
    reason: 'Support this proposal'
  })
});
```

### Smart Contract Integration

The governance system can be integrated with smart contracts for:

- On-chain voting
- Automatic execution
- Treasury management
- Parameter enforcement

## Best Practices

1. **Clear Proposals**: Provide detailed descriptions and rationales
2. **Reasonable Thresholds**: Set achievable quorum and approval thresholds
3. **Community Discussion**: Use forums for proposal discussion
4. **Gradual Changes**: Implement major changes in phases
5. **Monitoring**: Track proposal outcomes and system impact

## Future Enhancements

1. **Delegated Voting**: Allow users to delegate voting power
2. **Quadratic Voting**: Implement more sophisticated voting mechanisms
3. **Time-locked Voting**: Lock tokens for voting power boosts
4. **Multi-sig Execution**: Require multiple signatures for execution
5. **Proposal Templates**: Standardize proposal formats

## Support

For governance-related questions:

- Check the API documentation
- Review proposal history
- Contact the governance team
- Participate in community discussions

@@ -1,204 +0,0 @@
# AITBC Roadmap Retrospective - [Period]

**Date**: [Date]
**Period**: [e.g., H1 2024, H2 2024]
**Authors**: AITBC Core Team

## Executive Summary

[Brief 2-3 sentence summary of the period's achievements and challenges]

## KPI Performance Review

### Key Metrics

| KPI | Target | Actual | Status | Notes |
|-----|--------|--------|--------|-------|
| Active Marketplaces | [target] | [actual] | ✅/⚠️/❌ | [comments] |
| Cross-Chain Volume | [target] | [actual] | ✅/⚠️/❌ | [comments] |
| Active Developers | [target] | [actual] | ✅/⚠️/❌ | [comments] |
| TVL (Total Value Locked) | [target] | [actual] | ✅/⚠️/❌ | [comments] |
| Transaction Volume | [target] | [actual] | ✅/⚠️/❌ | [comments] |

### Performance Analysis

#### Achievements

- [List 3-5 major achievements]
- [Include metrics and impact]

#### Challenges

- [List 2-3 key challenges]
- [Include root causes if known]

#### Learnings

- [Key insights from the period]
- [What worked well]
- [What didn't work as expected]

## Roadmap Progress

### Completed Items

#### Stage 7 - Community & Governance

- ✅ [Item] - [Date completed] - [Brief description]
- ✅ [Item] - [Date completed] - [Brief description]

#### Stage 8 - Frontier R&D & Global Expansion

- ✅ [Item] - [Date completed] - [Brief description]
- ✅ [Item] - [Date completed] - [Brief description]

### In Progress Items

#### [Stage Name]

- ⏳ [Item] - [Progress %] - [ETA] - [Blockers if any]
- ⏳ [Item] - [Progress %] - [ETA] - [Blockers if any]

### Delayed Items

#### [Stage Name]

- ⏸️ [Item] - [Original date] → [New date] - [Reason for delay]
- ⏸️ [Item] - [Original date] → [New date] - [Reason for delay]

### New Items Added

- 🆕 [Item] - [Added date] - [Priority] - [Rationale]

## Ecosystem Health

### Developer Ecosystem

- **New Developers**: [number]
- **Active Projects**: [number]
- **GitHub Stars**: [number]
- **Community Engagement**: [description]

### User Adoption

- **Active Users**: [number]
- **Transaction Growth**: [percentage]
- **Geographic Distribution**: [key regions]

### Partner Ecosystem

- **New Partners**: [number]
- **Integration Status**: [description]
- **Success Stories**: [1-2 examples]

## Technical Achievements

### Major Releases

- [Release Name] - [Date] - [Key features]
- [Release Name] - [Date] - [Key features]

### Research Outcomes

- [Paper/Prototype] - [Status] - [Impact]
- [Research Area] - [Findings] - [Next steps]

### Infrastructure Improvements

- [Improvement] - [Impact] - [Metrics]

## Community & Governance

### Governance Participation

- **Proposal Submissions**: [number]
- **Voting Turnout**: [percentage]
- **Community Discussions**: [key topics]

### Community Initiatives

- [Initiative] - [Participation] - [Outcomes]
- [Initiative] - [Participation] - [Outcomes]

### Events & Activities

- [Event] - [Attendance] - [Feedback]
- [Event] - [Attendance] - [Feedback]

## Financial Overview

### Treasury Status

- **Balance**: [amount]
- **Burn Rate**: [amount/month]
- **Runway**: [months]

### Grant Program

- **Grants Awarded**: [number]
- **Total Amount**: [amount]
- **Success Rate**: [percentage]

## Risk Assessment

### Technical Risks

- [Risk] - [Probability] - [Impact] - [Mitigation]

### Market Risks

- [Risk] - [Probability] - [Impact] - [Mitigation]

### Operational Risks

- [Risk] - [Probability] - [Impact] - [Mitigation]

## Next Period Goals

### Primary Objectives

1. [Objective] - [Success criteria]
2. [Objective] - [Success criteria]
3. [Objective] - [Success criteria]

### Key Initiatives

- [Initiative] - [Owner] - [Timeline]
- [Initiative] - [Owner] - [Timeline]
- [Initiative] - [Owner] - [Timeline]

### Resource Requirements

- **Team**: [needs]
- **Budget**: [amount]
- **Partnerships**: [requirements]

## Long-term Vision Updates

### Strategy Adjustments

- [Adjustment] - [Rationale] - [Expected impact]

### New Opportunities

- [Opportunity] - [Potential] - [Next steps]

### Timeline Revisions

- [Milestone] - [Original] → [Revised] - [Reason]

## Feedback & Suggestions

### Community Feedback

- [Summary of key feedback]
- [Action items]

### Partner Feedback

- [Summary of key feedback]
- [Action items]

### Internal Feedback

- [Summary of key feedback]
- [Action items]

## Appendices

### A. Detailed Metrics

[Additional charts and data]

### B. Project Timeline

[Visual timeline with dependencies]

### C. Risk Register

[Detailed risk matrix]

### D. Action Item Tracker

[List of action items with owners and due dates]

---

**Next Review Date**: [Date]
**Document Version**: [version]
**Distribution**: [list of recipients]

## Approval

| Role | Name | Signature | Date |
|------|------|-----------|------|
| Project Lead | | | |
| Tech Lead | | | |
| Community Lead | | | |
| Ecosystem Lead | | | |

@@ -1,271 +0,0 @@
# AITBC Annual Transparency Report - [Year]

**Published**: [Date]
**Reporting Period**: [Start Date] to [End Date]
**Prepared By**: AITBC Foundation

## Executive Summary

[2-3 paragraph summary of the year's achievements, challenges, and strategic direction]

## Mission & Vision Alignment

### Mission Progress

- [Progress towards decentralizing the AI/ML marketplace]
- [Key metrics showing mission advancement]
- [Community impact stories]

### Vision Milestones

- [Technical milestones achieved]
- [Ecosystem growth metrics]
- [Strategic partnerships formed]

## Governance Transparency

### Governance Structure

- **Current Model**: [Description of governance model]
- **Decision Making Process**: [How decisions are made]
- **Community Participation**: [Governance participation metrics]

### Key Governance Actions

| Date | Action | Outcome | Community Feedback |
|------|--------|---------|--------------------|
| [Date] | [Proposal/Decision] | [Result] | [Summary] |
| [Date] | [Proposal/Decision] | [Result] | [Summary] |

### Treasury & Financial Transparency

- **Total Treasury**: [Amount] AITBC
- **Annual Expenditure**: [Amount] AITBC
- **Funding Sources**: [Breakdown]
- **Expense Categories**: [Breakdown]

#### Budget Allocation

| Category | Budgeted | Actual | Variance | Notes |
|----------|----------|--------|----------|-------|
| Development | [Amount] | [Amount] | [Amount] | [Explanation] |
| Operations | [Amount] | [Amount] | [Amount] | [Explanation] |
| Community | [Amount] | [Amount] | [Amount] | [Explanation] |
| Research | [Amount] | [Amount] | [Amount] | [Explanation] |

## Technical Development

### Protocol Updates

#### Major Releases

- [Version] - [Date] - [Key Features]
- [Version] - [Date] - [Key Features]
- [Version] - [Date] - [Key Features]

#### Research & Development

- **Research Papers Published**: [Number]
- **Prototypes Developed**: [Number]
- **Patents Filed**: [Number]
- **Open Source Contributions**: [Details]

### Security & Reliability

- **Security Audits**: [Number] completed
- **Critical Issues**: [Number] found and fixed
- **Uptime**: [Percentage]
- **Incidents**: [Number] with details

### Performance Metrics

| Metric | Target | Actual | Status |
|--------|--------|--------|--------|
| TPS | [Target] | [Actual] | ✅/⚠️/❌ |
| Block Time | [Target] | [Actual] | ✅/⚠️/❌ |
| Finality | [Target] | [Actual] | ✅/⚠️/❌ |
| Gas Efficiency | [Target] | [Actual] | ✅/⚠️/❌ |

## Ecosystem Health

### Network Statistics

- **Total Transactions**: [Number]
- **Active Addresses**: [Number]
- **Total Value Locked (TVL)**: [Amount]
- **Cross-Chain Volume**: [Amount]
- **Marketplaces**: [Number]

### Developer Ecosystem

- **Active Developers**: [Number]
- **Projects Built**: [Number]
- **GitHub Stars**: [Number]
- **Developer Grants Awarded**: [Number]

### Community Metrics

- **Community Members**: [Discord/Telegram/etc.]
- **Monthly Active Users**: [Number]
- **Social Media Engagement**: [Metrics]
- **Event Participation**: [Number of events, attendance]

### Geographic Distribution

| Region | Users | Developers | Partners | Growth |
|--------|-------|------------|----------|--------|
| North America | [Number] | [Number] | [Number] | [%] |
| Europe | [Number] | [Number] | [Number] | [%] |
| Asia Pacific | [Number] | [Number] | [Number] | [%] |
| Other | [Number] | [Number] | [Number] | [%] |

## Sustainability Metrics

### Environmental Impact

- **Energy Consumption**: [kWh/year]
- **Carbon Footprint**: [tCO2/year]
- **Renewable Energy Usage**: [Percentage]
- **Efficiency Improvements**: [Year-over-year change]

### Economic Sustainability

- **Revenue Streams**: [Breakdown]
- **Cost Optimization**: [Achievements]
- **Long-term Funding**: [Strategy]
- **Risk Management**: [Approach]

### Social Impact

- **Education Programs**: [Number of participants]
- **Accessibility Features**: [Improvements]
- **Inclusion Initiatives**: [Programs launched]
- **Community Benefits**: [Stories/examples]

## Partnerships & Collaborations

### Strategic Partners

| Partner | Type | Since | Key Achievements |
|---------|------|-------|------------------|
| [Partner] | [Type] | [Year] | [Achievements] |
| [Partner] | [Type] | [Year] | [Achievements] |

### Academic Collaborations

- **University Partnerships**: [Number]
- **Research Projects**: [Number]
- **Student Programs**: [Participants]
- **Publications**: [Number]

### Industry Alliances

- **Consortium Members**: [Number]
- **Working Groups**: [Active groups]
- **Standardization Efforts**: [Contributions]
- **Joint Initiatives**: [Projects]

## Compliance & Legal

### Regulatory Compliance

- **Jurisdictions**: [Countries/regions of operation]
- **Licenses**: [Held licenses]
- **Compliance Programs**: [Active programs]
- **Audits**: [Results]

### Data Privacy

- **Privacy Policy Updates**: [Changes made]
- **Data Protection**: [Measures implemented]
- **User Rights**: [Enhancements]
- **Incidents**: [Any breaches/issues]

### Intellectual Property

- **Patents**: [Portfolio summary]
- **Trademarks**: [Registered marks]
- **Open Source**: [Licenses used]
- **Contributions**: [Policy]

## Risk Management

### Identified Risks

| Risk Category | Risk Level | Mitigation | Status |
|---------------|------------|------------|--------|
| Technical | [Level] | [Strategy] | [Status] |
| Market | [Level] | [Strategy] | [Status] |
| Regulatory | [Level] | [Strategy] | [Status] |
| Operational | [Level] | [Strategy] | [Status] |

### Incident Response

- **Security Incidents**: [Number] with details
- **Response Time**: [Average time]
- **Recovery Time**: [Average time]
- **Lessons Learned**: [Key takeaways]

## Community Feedback & Engagement

### Feedback Channels

- **Proposals Received**: [Number]
- **Community Votes**: [Number]
- **Feedback Implementation Rate**: [Percentage]
- **Response Time**: [Average time]

### Major Community Initiatives

- [Initiative 1] - [Participation] - [Outcome]
- [Initiative 2] - [Participation] - [Outcome]
- [Initiative 3] - [Participation] - [Outcome]

### Challenges & Concerns

- **Top Issues Raised**: [Summary]
- **Actions Taken**: [Responses]
- **Ongoing Concerns**: [Status]

## Future Outlook

### Next Year Goals

1. [Goal 1] - [Success criteria]
2. [Goal 2] - [Success criteria]
3. [Goal 3] - [Success criteria]

### Strategic Priorities

- [Priority 1] - [Rationale]
- [Priority 2] - [Rationale]
- [Priority 3] - [Rationale]

### Resource Allocation

- **Development**: [Planned investment]
- **Community**: [Planned investment]
- **Research**: [Planned investment]
- **Operations**: [Planned investment]

## Acknowledgments

### Contributors

- **Core Team**: [Number of contributors]
- **Community Contributors**: [Number]
- **Top Contributors**: [Recognition]

### Special Thanks

- [Individual/Organization 1]
- [Individual/Organization 2]
- [Individual/Organization 3]

## Appendices

### A. Detailed Financial Statements

[Link to detailed financial reports]

### B. Technical Specifications

[Link to technical documentation]

### C. Governance Records

[Link to governance documentation]

### D. Community Survey Results

[Key findings from community surveys]

### E. Third-Party Audits

[Links to audit reports]

---

## Contact & Verification

### Verification

- **Financial Audit**: [Auditor] - [Report link]
- **Technical Audit**: [Auditor] - [Report link]
- **Security Audit**: [Auditor] - [Report link]

### Contact Information

- **Transparency Questions**: transparency@aitbc.io
- **General Inquiries**: info@aitbc.io
- **Security Issues**: security@aitbc.io
- **Media Inquiries**: media@aitbc.io

### Document Information

- **Version**: [Version number]
- **Last Updated**: [Date]
- **Next Report Due**: [Date]
- **Archive**: [Link to past reports]

---

*This transparency report is published annually as part of AITBC's commitment to openness and accountability. All data presented is accurate to the best of our knowledge. For questions or clarifications, please contact us at transparency@aitbc.io.*
|
||||
@@ -1,45 +0,0 @@
|
||||
# AITBC Reference Documentation

Welcome to the AITBC reference documentation. This section contains technical specifications, architecture details, and historical documentation.

## Architecture & Design

- [Architecture Overview](architecture/) - System architecture documentation
- [Cross-Chain Settlement](architecture/cross-chain-settlement-design.md) - Cross-chain settlement design
- [Python SDK Transport](architecture/python-sdk-transport-design.md) - Transport abstraction design

## Bootstrap Specifications

- [Bootstrap Directory](bootstrap/dirs.md) - Original directory structure
- [Technical Plan](bootstrap/aitbc_tech_plan.md) - Original technical specification
- [Component Specs](bootstrap/) - Individual component specifications

## Cryptography & Privacy

- [ZK Receipt Attestation](zk-receipt-attestation.md) - Zero-knowledge proof implementation
- [ZK Implementation Summary](zk-implementation-summary.md) - ZK implementation overview
- [ZK Technology Comparison](zk-technology-comparison.md) - ZK technology comparison
- [Confidential Transactions](confidential-transactions.md) - Confidential transaction implementation
- [Confidential Implementation Summary](confidential-implementation-summary.md) - Implementation summary
- [Threat Modeling](threat-modeling.md) - Security threat modeling

## Enterprise Features

- [Enterprise SLA](enterprise-sla.md) - Service level agreements
- [Multi-tenancy](multi-tenancy.md) - Multi-tenant infrastructure
- [HSM Integration](hsm-integration.md) - Hardware security module integration

## Project Documentation

- [Roadmap](../roadmap.md) - Development roadmap
- [Completed Tasks](../done.md) - List of completed features
- [Beta Release Plan](beta-release-plan.md) - Beta release planning

## Historical

- [Component Documentation](../coordinator_api.md) - Historical component docs
- [Bootstrap Archive](bootstrap/) - Original bootstrap documentation

## Glossary

- [Terms](glossary.md) - AITBC terminology and definitions

@@ -1,516 +0,0 @@

# AITBC API Reference (OpenAPI)

This document provides the complete API reference for the AITBC Coordinator API.

## Base URL

```
Production: https://aitbc.bubuit.net/api
Local:      http://127.0.0.1:8001
```

## Authentication

Most endpoints require an API key passed in the header:

```
X-Api-Key: your-api-key
```
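
As a quick sanity check, attaching the key with the Python standard library might look like the sketch below; the base URL is the local development server and the key value is the same placeholder as above:

```python
import urllib.request

API_BASE = "http://127.0.0.1:8001"  # local development server
API_KEY = "your-api-key"            # placeholder key

def build_request(path: str) -> urllib.request.Request:
    """Build a GET request with the X-Api-Key header attached."""
    return urllib.request.Request(API_BASE + path, headers={"X-Api-Key": API_KEY})

req = build_request("/v1/jobs")
print(req.full_url)  # http://127.0.0.1:8001/v1/jobs
```

Passing the resulting request to `urllib.request.urlopen` (or using any HTTP client with the same header) is all an authenticated endpoint needs.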

## OpenAPI Specification

```yaml
openapi: 3.0.3
info:
  title: AITBC Coordinator API
  version: 1.0.0
  description: API for submitting AI compute jobs and managing the AITBC network

servers:
  - url: https://aitbc.bubuit.net/api
    description: Production
  - url: http://127.0.0.1:8001
    description: Local development

paths:
  /health:
    get:
      summary: Health check
      tags: [System]
      responses:
        '200':
          description: Service is healthy
          content:
            application/json:
              schema:
                type: object
                properties:
                  status:
                    type: string
                    example: ok
                  version:
                    type: string
                    example: 1.0.0

  /v1/jobs:
    post:
      summary: Submit a new job
      tags: [Jobs]
      security:
        - ApiKey: []
      requestBody:
        required: true
        content:
          application/json:
            schema:
              $ref: '#/components/schemas/JobRequest'
      responses:
        '201':
          description: Job created
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/Job'
        '400':
          $ref: '#/components/responses/BadRequest'
        '401':
          $ref: '#/components/responses/Unauthorized'

    get:
      summary: List jobs
      tags: [Jobs]
      parameters:
        - name: status
          in: query
          schema:
            type: string
            enum: [pending, running, completed, failed, cancelled]
        - name: limit
          in: query
          schema:
            type: integer
            default: 20
            maximum: 100
        - name: offset
          in: query
          schema:
            type: integer
            default: 0
      responses:
        '200':
          description: List of jobs
          content:
            application/json:
              schema:
                type: array
                items:
                  $ref: '#/components/schemas/Job'

  /v1/jobs/{job_id}:
    get:
      summary: Get job details
      tags: [Jobs]
      parameters:
        - name: job_id
          in: path
          required: true
          schema:
            type: string
      responses:
        '200':
          description: Job details
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/Job'
        '404':
          $ref: '#/components/responses/NotFound'

  /v1/jobs/{job_id}/cancel:
    post:
      summary: Cancel a job
      tags: [Jobs]
      parameters:
        - name: job_id
          in: path
          required: true
          schema:
            type: string
      responses:
        '200':
          description: Job cancelled
        '404':
          $ref: '#/components/responses/NotFound'
        '409':
          description: Job cannot be cancelled (already completed)

  /v1/jobs/available:
    get:
      summary: Get available jobs for miners
      tags: [Miners]
      parameters:
        - name: miner_id
          in: query
          required: true
          schema:
            type: string
      responses:
        '200':
          description: Available job or null
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/Job'

  /v1/jobs/{job_id}/claim:
    post:
      summary: Claim a job for processing
      tags: [Miners]
      parameters:
        - name: job_id
          in: path
          required: true
          schema:
            type: string
      requestBody:
        required: true
        content:
          application/json:
            schema:
              type: object
              properties:
                miner_id:
                  type: string
              required:
                - miner_id
      responses:
        '200':
          description: Job claimed
        '409':
          description: Job already claimed

  /v1/jobs/{job_id}/complete:
    post:
      summary: Submit job result
      tags: [Miners]
      parameters:
        - name: job_id
          in: path
          required: true
          schema:
            type: string
      requestBody:
        required: true
        content:
          application/json:
            schema:
              $ref: '#/components/schemas/JobResult'
      responses:
        '200':
          description: Job completed
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/Receipt'

  /v1/miners/register:
    post:
      summary: Register a miner
      tags: [Miners]
      requestBody:
        required: true
        content:
          application/json:
            schema:
              $ref: '#/components/schemas/MinerRegistration'
      responses:
        '201':
          description: Miner registered
        '400':
          $ref: '#/components/responses/BadRequest'

  /v1/receipts:
    get:
      summary: List receipts
      tags: [Receipts]
      parameters:
        - name: client
          in: query
          schema:
            type: string
        - name: provider
          in: query
          schema:
            type: string
        - name: limit
          in: query
          schema:
            type: integer
            default: 20
      responses:
        '200':
          description: List of receipts
          content:
            application/json:
              schema:
                type: array
                items:
                  $ref: '#/components/schemas/Receipt'

  /v1/receipts/{receipt_id}:
    get:
      summary: Get receipt details
      tags: [Receipts]
      parameters:
        - name: receipt_id
          in: path
          required: true
          schema:
            type: string
      responses:
        '200':
          description: Receipt details
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/Receipt'
        '404':
          $ref: '#/components/responses/NotFound'

  /explorer/blocks:
    get:
      summary: Get recent blocks
      tags: [Explorer]
      parameters:
        - name: limit
          in: query
          schema:
            type: integer
            default: 10
      responses:
        '200':
          description: List of blocks
          content:
            application/json:
              schema:
                type: array
                items:
                  $ref: '#/components/schemas/Block'

  /explorer/transactions:
    get:
      summary: Get recent transactions
      tags: [Explorer]
      responses:
        '200':
          description: List of transactions

  /explorer/receipts:
    get:
      summary: Get recent receipts
      tags: [Explorer]
      responses:
        '200':
          description: List of receipts

  /explorer/stats:
    get:
      summary: Get network statistics
      tags: [Explorer]
      responses:
        '200':
          description: Network stats
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/NetworkStats'

components:
  securitySchemes:
    ApiKey:
      type: apiKey
      in: header
      name: X-Api-Key

  schemas:
    JobRequest:
      type: object
      properties:
        prompt:
          type: string
          description: Input prompt
        model:
          type: string
          default: llama3.2
        params:
          type: object
          properties:
            max_tokens:
              type: integer
              default: 256
            temperature:
              type: number
              default: 0.7
            top_p:
              type: number
              default: 0.9
      required:
        - prompt

    Job:
      type: object
      properties:
        job_id:
          type: string
        status:
          type: string
          enum: [pending, running, completed, failed, cancelled]
        prompt:
          type: string
        model:
          type: string
        result:
          type: string
        miner_id:
          type: string
        created_at:
          type: string
          format: date-time
        started_at:
          type: string
          format: date-time
        completed_at:
          type: string
          format: date-time

    JobResult:
      type: object
      properties:
        miner_id:
          type: string
        result:
          type: string
        completed_at:
          type: string
          format: date-time
      required:
        - miner_id
        - result

    Receipt:
      type: object
      properties:
        receipt_id:
          type: string
        job_id:
          type: string
        provider:
          type: string
        client:
          type: string
        units:
          type: number
        unit_type:
          type: string
        price:
          type: number
        model:
          type: string
        started_at:
          type: integer
        completed_at:
          type: integer
        signature:
          $ref: '#/components/schemas/Signature'

    Signature:
      type: object
      properties:
        alg:
          type: string
        key_id:
          type: string
        sig:
          type: string

    MinerRegistration:
      type: object
      properties:
        miner_id:
          type: string
        capabilities:
          type: array
          items:
            type: string
        gpu_info:
          type: object
          properties:
            name:
              type: string
            memory:
              type: string
      required:
        - miner_id
        - capabilities

    Block:
      type: object
      properties:
        height:
          type: integer
        hash:
          type: string
        timestamp:
          type: string
          format: date-time
        transactions:
          type: integer

    NetworkStats:
      type: object
      properties:
        total_jobs:
          type: integer
        active_miners:
          type: integer
        total_receipts:
          type: integer
        block_height:
          type: integer

    # Error lives under schemas so the $refs in the responses below resolve
    Error:
      type: object
      properties:
        detail:
          type: string
        error_code:
          type: string

  responses:
    BadRequest:
      description: Invalid request
      content:
        application/json:
          schema:
            $ref: '#/components/schemas/Error'

    Unauthorized:
      description: Authentication required
      content:
        application/json:
          schema:
            $ref: '#/components/schemas/Error'

    NotFound:
      description: Resource not found
      content:
        application/json:
          schema:
            $ref: '#/components/schemas/Error'
```
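
For example, a job submission body that satisfies the `JobRequest` schema above needs only `prompt`; `model` and the `params` values fall back to their declared defaults. The sketch below fills the defaults in explicitly:

```python
import json

# Build a request body matching components/schemas/JobRequest.
# Only "prompt" is required; the other values mirror the schema defaults.
job_request = {
    "prompt": "Explain quantum computing",
    "model": "llama3.2",
    "params": {"max_tokens": 256, "temperature": 0.7, "top_p": 0.9},
}

body = json.dumps(job_request)
print(body)
```

POSTing `body` to `/v1/jobs` with the `X-Api-Key` header set should return a `201` with a `Job` object.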

## Interactive Documentation

Access the interactive API documentation at:

- **Swagger UI**: https://aitbc.bubuit.net/api/docs
- **ReDoc**: https://aitbc.bubuit.net/api/redoc

@@ -1,269 +0,0 @@

# Error Codes and Handling

This document defines all error codes used by the AITBC API and how to handle them.

## Error Response Format

All API errors follow this format:

```json
{
  "detail": "Human-readable error message",
  "error_code": "ERROR_CODE",
  "request_id": "req-abc123"
}
```

| Field | Type | Description |
|-------|------|-------------|
| `detail` | string | Human-readable description |
| `error_code` | string | Machine-readable error code |
| `request_id` | string | Request identifier for debugging |

## HTTP Status Codes

| Code | Meaning | When Used |
|------|---------|-----------|
| 200 | OK | Successful GET, POST (update) |
| 201 | Created | Successful POST (create) |
| 204 | No Content | Successful DELETE |
| 400 | Bad Request | Invalid input |
| 401 | Unauthorized | Missing/invalid authentication |
| 403 | Forbidden | Insufficient permissions |
| 404 | Not Found | Resource doesn't exist |
| 409 | Conflict | Resource state conflict |
| 422 | Unprocessable Entity | Validation error |
| 429 | Too Many Requests | Rate limited |
| 500 | Internal Server Error | Server error |
| 502 | Bad Gateway | Upstream service error |
| 503 | Service Unavailable | Maintenance/overload |

## Error Codes by Category

### Authentication Errors (AUTH_*)

| Code | HTTP | Description | Resolution |
|------|------|-------------|------------|
| `AUTH_MISSING_KEY` | 401 | No API key provided | Include `X-Api-Key` header |
| `AUTH_INVALID_KEY` | 401 | API key is invalid | Check API key is correct |
| `AUTH_EXPIRED_KEY` | 401 | API key has expired | Generate new API key |
| `AUTH_INSUFFICIENT_SCOPE` | 403 | Key lacks required permissions | Use key with correct scope |

**Example:**
```json
{
  "detail": "API key is required for this endpoint",
  "error_code": "AUTH_MISSING_KEY"
}
```

### Job Errors (JOB_*)

| Code | HTTP | Description | Resolution |
|------|------|-------------|------------|
| `JOB_NOT_FOUND` | 404 | Job doesn't exist | Check job ID |
| `JOB_ALREADY_CLAIMED` | 409 | Job claimed by another miner | Request different job |
| `JOB_ALREADY_COMPLETED` | 409 | Job already finished | No action needed |
| `JOB_ALREADY_CANCELLED` | 409 | Job was cancelled | Submit new job |
| `JOB_EXPIRED` | 410 | Job deadline passed | Submit new job |
| `JOB_INVALID_STATUS` | 400 | Invalid status transition | Check job state |

**Example:**
```json
{
  "detail": "Job job-abc123 not found",
  "error_code": "JOB_NOT_FOUND"
}
```

### Validation Errors (VALIDATION_*)

| Code | HTTP | Description | Resolution |
|------|------|-------------|------------|
| `VALIDATION_MISSING_FIELD` | 422 | Required field missing | Include required field |
| `VALIDATION_INVALID_TYPE` | 422 | Wrong field type | Use correct type |
| `VALIDATION_OUT_OF_RANGE` | 422 | Value outside allowed range | Use value in range |
| `VALIDATION_INVALID_FORMAT` | 422 | Wrong format (e.g., date) | Use correct format |
| `VALIDATION_PROMPT_TOO_LONG` | 422 | Prompt exceeds limit | Shorten prompt |
| `VALIDATION_INVALID_MODEL` | 422 | Model not supported | Use valid model |

**Example:**
```json
{
  "detail": "Field 'prompt' is required",
  "error_code": "VALIDATION_MISSING_FIELD",
  "field": "prompt"
}
```

### Miner Errors (MINER_*)

| Code | HTTP | Description | Resolution |
|------|------|-------------|------------|
| `MINER_NOT_FOUND` | 404 | Miner not registered | Register miner first |
| `MINER_ALREADY_REGISTERED` | 409 | Miner ID already exists | Use different ID |
| `MINER_OFFLINE` | 503 | Miner not responding | Check miner status |
| `MINER_CAPACITY_FULL` | 503 | Miner at max capacity | Wait or use different miner |

### Receipt Errors (RECEIPT_*)

| Code | HTTP | Description | Resolution |
|------|------|-------------|------------|
| `RECEIPT_NOT_FOUND` | 404 | Receipt doesn't exist | Check receipt ID |
| `RECEIPT_INVALID_SIGNATURE` | 400 | Signature verification failed | Check receipt integrity |
| `RECEIPT_ALREADY_CLAIMED` | 409 | Receipt already processed | No action needed |

### Rate Limit Errors (RATE_*)

| Code | HTTP | Description | Resolution |
|------|------|-------------|------------|
| `RATE_LIMIT_EXCEEDED` | 429 | Too many requests | Wait and retry |
| `RATE_QUOTA_EXCEEDED` | 429 | Daily/monthly quota hit | Upgrade plan or wait |

**Response includes:**
```json
{
  "detail": "Rate limit exceeded. Retry after 60 seconds",
  "error_code": "RATE_LIMIT_EXCEEDED",
  "retry_after": 60
}
```

### Payment Errors (PAYMENT_*)

| Code | HTTP | Description | Resolution |
|------|------|-------------|------------|
| `PAYMENT_INSUFFICIENT_BALANCE` | 402 | Not enough AITBC | Top up balance |
| `PAYMENT_FAILED` | 500 | Payment processing error | Retry or contact support |

### System Errors (SYSTEM_*)

| Code | HTTP | Description | Resolution |
|------|------|-------------|------------|
| `SYSTEM_INTERNAL_ERROR` | 500 | Unexpected server error | Retry or report bug |
| `SYSTEM_MAINTENANCE` | 503 | Scheduled maintenance | Wait for maintenance to end |
| `SYSTEM_OVERLOADED` | 503 | System at capacity | Retry with backoff |
| `SYSTEM_UPSTREAM_ERROR` | 502 | Dependency failure | Retry later |

## Error Handling Best Practices

### Retry Logic

```python
import time
import httpx

def request_with_retry(url, max_retries=3):
    """GET with retries: honors retry_after on rate limits, backs off on 5xx."""
    for attempt in range(max_retries):
        try:
            response = httpx.get(url)
            response.raise_for_status()
            return response.json()
        except httpx.HTTPStatusError as e:
            error = e.response.json()
            code = error.get("error_code", "")

            # Don't retry client errors (except rate limits)
            if e.response.status_code < 500 and code != "RATE_LIMIT_EXCEEDED":
                raise

            # Get retry delay
            if code == "RATE_LIMIT_EXCEEDED":
                delay = error.get("retry_after", 60)
            else:
                delay = 2 ** attempt  # Exponential backoff

            if attempt < max_retries - 1:
                time.sleep(delay)
            else:
                raise
```

### JavaScript Error Handling

```javascript
async function apiRequest(url, options = {}) {
  const response = await fetch(url, options);

  if (!response.ok) {
    const error = await response.json();

    switch (error.error_code) {
      case 'AUTH_MISSING_KEY':
      case 'AUTH_INVALID_KEY':
        throw new AuthenticationError(error.detail);

      case 'RATE_LIMIT_EXCEEDED': {
        const retryAfter = error.retry_after || 60;
        await sleep(retryAfter * 1000);
        return apiRequest(url, options); // Retry
      }

      case 'JOB_NOT_FOUND':
        throw new NotFoundError(error.detail);

      default:
        throw new APIError(error.detail, error.error_code);
    }
  }

  return response.json();
}
```

### Logging Errors

Always log the `request_id` for debugging:

```python
import logging

logger = logging.getLogger(__name__)

try:
    result = api_call()
except APIError as e:
    logger.error(
        "API error",
        extra={
            "error_code": e.error_code,
            "detail": e.detail,
            "request_id": e.request_id,
        },
    )
```

## Reporting Issues

When reporting errors to support, include:

1. **Error code** and message
2. **Request ID** (from response)
3. **Timestamp** of the error
4. **Request details** (endpoint, parameters)
5. **Steps to reproduce**

## Error Code Reference Table

| Code | HTTP | Category | Retryable |
|------|------|----------|-----------|
| `AUTH_MISSING_KEY` | 401 | Auth | No |
| `AUTH_INVALID_KEY` | 401 | Auth | No |
| `AUTH_EXPIRED_KEY` | 401 | Auth | No |
| `AUTH_INSUFFICIENT_SCOPE` | 403 | Auth | No |
| `JOB_NOT_FOUND` | 404 | Job | No |
| `JOB_ALREADY_CLAIMED` | 409 | Job | No |
| `JOB_ALREADY_COMPLETED` | 409 | Job | No |
| `JOB_ALREADY_CANCELLED` | 409 | Job | No |
| `JOB_EXPIRED` | 410 | Job | No |
| `VALIDATION_MISSING_FIELD` | 422 | Validation | No |
| `VALIDATION_INVALID_TYPE` | 422 | Validation | No |
| `VALIDATION_PROMPT_TOO_LONG` | 422 | Validation | No |
| `VALIDATION_INVALID_MODEL` | 422 | Validation | No |
| `MINER_NOT_FOUND` | 404 | Miner | No |
| `MINER_OFFLINE` | 503 | Miner | Yes |
| `RECEIPT_NOT_FOUND` | 404 | Receipt | No |
| `RATE_LIMIT_EXCEEDED` | 429 | Rate | Yes |
| `RATE_QUOTA_EXCEEDED` | 429 | Rate | No |
| `PAYMENT_INSUFFICIENT_BALANCE` | 402 | Payment | No |
| `SYSTEM_INTERNAL_ERROR` | 500 | System | Yes |
| `SYSTEM_MAINTENANCE` | 503 | System | Yes |
| `SYSTEM_OVERLOADED` | 503 | System | Yes |
@@ -1,299 +0,0 @@

# Protocol Message Formats

This document defines the message formats used for communication between AITBC network components.

## Overview

AITBC uses JSON-based messages for all inter-component communication:

- **Client → Coordinator**: Job requests
- **Coordinator → Miner**: Job assignments
- **Miner → Coordinator**: Job results
- **Coordinator → Client**: Receipts

## Message Types

### Job Request

Sent by clients to submit a new job.

```json
{
  "type": "job_request",
  "version": "1.0",
  "timestamp": "2026-01-24T15:00:00Z",
  "payload": {
    "prompt": "Explain quantum computing",
    "model": "llama3.2",
    "params": {
      "max_tokens": 256,
      "temperature": 0.7,
      "top_p": 0.9,
      "stream": false
    },
    "client_id": "ait1client...",
    "nonce": "abc123"
  },
  "signature": {
    "alg": "Ed25519",
    "key_id": "client-key-001",
    "sig": "base64..."
  }
}
```

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `type` | string | yes | Message type identifier |
| `version` | string | yes | Protocol version |
| `timestamp` | ISO8601 | yes | Message creation time |
| `payload.prompt` | string | yes | Input text |
| `payload.model` | string | yes | Model identifier |
| `payload.params` | object | no | Model parameters |
| `payload.client_id` | string | yes | Client address |
| `payload.nonce` | string | yes | Unique request identifier |
| `signature` | object | no | Optional client signature |

### Job Assignment

Sent by the coordinator to assign a job to a miner.

```json
{
  "type": "job_assignment",
  "version": "1.0",
  "timestamp": "2026-01-24T15:00:01Z",
  "payload": {
    "job_id": "job-abc123",
    "prompt": "Explain quantum computing",
    "model": "llama3.2",
    "params": {
      "max_tokens": 256,
      "temperature": 0.7
    },
    "client_id": "ait1client...",
    "deadline": "2026-01-24T15:05:00Z",
    "reward": 5.0
  },
  "coordinator_id": "coord-eu-west-1"
}
```

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `payload.job_id` | string | yes | Unique job identifier |
| `payload.deadline` | ISO8601 | yes | Job must complete by this time |
| `payload.reward` | number | yes | AITBC reward for completion |
| `coordinator_id` | string | yes | Assigning coordinator |

### Job Result

Sent by a miner after completing a job.

```json
{
  "type": "job_result",
  "version": "1.0",
  "timestamp": "2026-01-24T15:00:05Z",
  "payload": {
    "job_id": "job-abc123",
    "miner_id": "ait1miner...",
    "result": "Quantum computing is a type of computation...",
    "result_hash": "sha256:abc123...",
    "metrics": {
      "tokens_generated": 150,
      "inference_time_ms": 2500,
      "gpu_memory_used_mb": 4096
    }
  },
  "signature": {
    "alg": "Ed25519",
    "key_id": "miner-key-001",
    "sig": "base64..."
  }
}
```

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `payload.result` | string | yes | Generated output |
| `payload.result_hash` | string | yes | SHA-256 hash of result |
| `payload.metrics` | object | no | Performance metrics |
| `signature` | object | yes | Miner signature |

### Receipt

Generated by the coordinator after job completion.

```json
{
  "type": "receipt",
  "version": "1.0",
  "timestamp": "2026-01-24T15:00:06Z",
  "payload": {
    "receipt_id": "rcpt-20260124-001234",
    "job_id": "job-abc123",
    "provider": "ait1miner...",
    "client": "ait1client...",
    "units": 2.5,
    "unit_type": "gpu_seconds",
    "price": 5.0,
    "model": "llama3.2",
    "started_at": 1737730801,
    "completed_at": 1737730805,
    "result_hash": "sha256:abc123..."
  },
  "signature": {
    "alg": "Ed25519",
    "key_id": "coord-key-001",
    "sig": "base64..."
  }
}
```

See [Receipt Specification](receipt-spec.md) for full details.

### Miner Registration

Sent by a miner to register with a coordinator.

```json
{
  "type": "miner_registration",
  "version": "1.0",
  "timestamp": "2026-01-24T14:00:00Z",
  "payload": {
    "miner_id": "ait1miner...",
    "capabilities": ["llama3.2", "llama3.2:1b", "codellama"],
    "gpu_info": {
      "name": "NVIDIA RTX 4090",
      "memory_gb": 24,
      "cuda_version": "12.1",
      "driver_version": "535.104.05"
    },
    "endpoint": "http://miner.example.com:8080",
    "max_concurrent_jobs": 4
  },
  "signature": {
    "alg": "Ed25519",
    "key_id": "miner-key-001",
    "sig": "base64..."
  }
}
```

### Heartbeat

Sent periodically by miners to indicate availability.

```json
{
  "type": "heartbeat",
  "version": "1.0",
  "timestamp": "2026-01-24T15:01:00Z",
  "payload": {
    "miner_id": "ait1miner...",
    "status": "available",
    "current_jobs": 1,
    "gpu_utilization": 45.5,
    "memory_used_gb": 8.2
  }
}
```

| Status | Description |
|--------|-------------|
| `available` | Ready to accept jobs |
| `busy` | Processing at capacity |
| `maintenance` | Temporarily unavailable |
| `offline` | Shutting down |
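
A miner can assemble this message from its current state. The sketch below fills the envelope fields the spec requires (`type`, `version`, `timestamp`) around a minimal status payload; the GPU metrics are omitted since they are optional in the example above:

```python
from datetime import datetime, timezone

VALID_STATUSES = {"available", "busy", "maintenance", "offline"}

def build_heartbeat(miner_id: str, status: str, current_jobs: int) -> dict:
    """Assemble a heartbeat message with the required envelope fields."""
    if status not in VALID_STATUSES:
        raise ValueError(f"unknown status: {status}")
    return {
        "type": "heartbeat",
        "version": "1.0",
        # ISO8601 with a trailing Z, matching the examples in this document
        "timestamp": datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ"),
        "payload": {
            "miner_id": miner_id,
            "status": status,
            "current_jobs": current_jobs,
        },
    }

msg = build_heartbeat("ait1miner...", "available", 1)
print(msg["type"], msg["payload"]["status"])  # heartbeat available
```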

### Error

Returned when an operation fails.

```json
{
  "type": "error",
  "version": "1.0",
  "timestamp": "2026-01-24T15:00:02Z",
  "payload": {
    "error_code": "JOB_NOT_FOUND",
    "message": "Job with ID job-xyz does not exist",
    "details": {
      "job_id": "job-xyz"
    }
  },
  "request_id": "req-123"
}
```

## Message Validation

### Required Fields

All messages MUST include:

- `type` - Message type identifier
- `version` - Protocol version (currently "1.0")
- `timestamp` - ISO8601 formatted creation time
- `payload` - Message-specific data

### Signature Verification

For signed messages:

1. Extract `payload` as canonical JSON (sorted keys, no whitespace)
2. Compute SHA-256 hash of canonical payload
3. Verify signature using specified algorithm and key

```python
import base64
import hashlib
import json

from nacl.signing import VerifyKey


def verify_message(message: dict, public_key: bytes) -> bool:
    payload = message["payload"]
    signature = message["signature"]

    # Canonical JSON: sorted keys, no whitespace
    canonical = json.dumps(payload, sort_keys=True, separators=(',', ':'))
    payload_hash = hashlib.sha256(canonical.encode()).digest()

    # Verify the signature (base64-encoded, per the message examples above)
    verify_key = VerifyKey(public_key)
    try:
        verify_key.verify(payload_hash, base64.b64decode(signature["sig"]))
        return True
    except Exception:
        return False
```

### Timestamp Validation

- Messages with timestamps more than 5 minutes in the future SHOULD be rejected
- Messages with timestamps more than 24 hours in the past MAY be rejected
- Coordinators SHOULD track nonces to prevent replay attacks
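
A minimal sketch of these checks (names are illustrative; a real coordinator would parse the ISO8601 `timestamp` field into a Unix time and persist seen nonces rather than keep them in memory):

```python
import time

MAX_FUTURE_SKEW = 5 * 60   # 5 minutes
MAX_AGE = 24 * 60 * 60     # 24 hours

_seen_nonces = set()


def check_timestamp(ts, now=None):
    """Apply the SHOULD/MAY timestamp rules above to a Unix timestamp."""
    now = time.time() if now is None else now
    if ts > now + MAX_FUTURE_SKEW:
        return False  # more than 5 minutes in the future: reject
    if ts < now - MAX_AGE:
        return False  # older than 24 hours: reject
    return True


def check_nonce(nonce):
    """Reject a nonce that has already been seen (replay protection)."""
    if nonce in _seen_nonces:
        return False
    _seen_nonces.add(nonce)
    return True
```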

## Transport

### HTTP/REST

Primary transport for client-coordinator communication:

- Content-Type: `application/json`
- UTF-8 encoding
- HTTPS required in production

### WebSocket

For real-time miner-coordinator communication:

- JSON messages over WebSocket frames
- Ping/pong for connection health
- Automatic reconnection on disconnect

## Versioning

Protocol version follows semantic versioning:

- **Major**: Breaking changes
- **Minor**: New features, backward compatible
- **Patch**: Bug fixes

Clients SHOULD include supported versions in requests.
Servers SHOULD respond with highest mutually supported version.
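
The negotiation rule can be sketched as follows (the helper name is illustrative; versions must be compared numerically, not lexically, so that `"1.10"` ranks above `"1.2"`):

```python
def negotiate_version(client_versions, server_versions):
    """Return the highest mutually supported protocol version, or None."""
    common = set(client_versions) & set(server_versions)
    if not common:
        return None
    # Compare as (major, minor, ...) integer tuples, not as strings.
    return max(common, key=lambda v: tuple(int(p) for p in v.split(".")))
```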
@@ -1,333 +0,0 @@
# AITBC Receipt Specification (Draft)

## Overview

This document defines the canonical schema and serialization rules for receipts generated by the AITBC network after miners complete compute jobs. Receipts serve as tamper-evident evidence tying compute usage to token minting events.

## Receipt Fields

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `version` | string | yes | Receipt schema version (e.g., `"1.0"`). |
| `receipt_id` | string | yes | Unique identifier for this receipt. SHOULD be globally unique (UUID or hash). |
| `job_id` | string | yes | Identifier of the coordinator job this receipt references. |
| `provider` | string | yes | Miner address/account that executed the job. |
| `client` | string | yes | Client address/account that requested the job. |
| `units` | number | yes | Compute units earned by the miner (token minting basis). |
| `unit_type` | string | yes | Unit denomination (e.g., `"gpu_seconds"`, `"token_ops"`). |
| `price` | number | optional | Price paid by the client for the job (same unit as `units` if applicable). |
| `model` | string | optional | Model or workload identifier (e.g., `"runwayml/stable-diffusion-v1-5"`). |
| `prompt_hash` | string | optional | Hash of user prompt or workload input to preserve privacy. |
| `started_at` | integer | yes | Unix timestamp (seconds) when the job started. |
| `completed_at` | integer | yes | Unix timestamp when the job completed. |
| `duration_ms` | integer | optional | Milliseconds elapsed during execution. |
| `artifact_hash` | string | optional | SHA-256 hash of the result artifact(s). |
| `coordinator_id` | string | optional | Coordinator identifier if multiple coordinators exist. |
| `nonce` | string | optional | Unique nonce to prevent replay/double minting. |
| `chain_id` | integer | optional | Target chain/network identifier. |
| `metadata` | object | optional | Arbitrary key/value pairs for future extensions. |
| `signature` | object | conditional | Signature object (see below). |

### Signature Object

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `alg` | string | yes | Signature algorithm (e.g., `"Ed25519"`, `"secp256k1"`). |
| `key_id` | string | yes | Identifier of signing key (e.g., `"miner-ed25519-2025-09"`). |
| `sig` | string | yes | Base64url-encoded signature over the canonical receipt bytes. |

Receipts SHOULD be signed by the miner, a coordinator attester, or both, depending on the trust model. Multiple signatures can be supported by storing them in a `metadata.signatures` array.

## Canonical Serialization

1. Construct a JSON object containing all fields except `signature`.
2. Remove any fields with null/undefined values.
3. Sort keys lexicographically at each object level.
4. Serialize using UTF-8 without whitespace (RFC 8785 style).
5. Compute hash as `sha256(serialized_json)` for Ed25519 signing.
6. Attach `signature` object containing algorithm, key ID, and base64url signature over the hash.
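
A sketch of steps 1-5 in Python (note that `json.dumps` with sorted keys and compact separators only approximates full RFC 8785 canonicalization, which also prescribes number formatting):

```python
import hashlib
import json


def _strip_nulls(value):
    """Step 2: recursively drop null/undefined fields."""
    if isinstance(value, dict):
        return {k: _strip_nulls(v) for k, v in value.items() if v is not None}
    if isinstance(value, list):
        return [_strip_nulls(v) for v in value]
    return value


def canonical_receipt_bytes(receipt):
    """Steps 1-4: drop `signature`, strip nulls, sort keys, compact UTF-8 JSON."""
    body = {k: v for k, v in receipt.items() if k != "signature"}
    return json.dumps(_strip_nulls(body), sort_keys=True,
                      separators=(",", ":"), ensure_ascii=False).encode("utf-8")


def receipt_signing_hash(receipt):
    """Step 5: the SHA-256 digest that the Ed25519 key signs."""
    return hashlib.sha256(canonical_receipt_bytes(receipt)).digest()
```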

## Validation Rules

- `completed_at >= started_at`.
- `units >= 0` and `price >= 0` when present.
- `signature.alg` MUST be one of the network approved algorithms (initially `Ed25519`).
- `chain_id` SHOULD match the target blockchain network when provided.
- Reject receipts older than the network-defined retention period.

## Example Receipt (unsigned)

```json
{
  "version": "1.0",
  "receipt_id": "rcpt-20250926-000123",
  "job_id": "job-abc123",
  "provider": "ait1minerxyz...",
  "client": "ait1clientabc...",
  "units": 1.9,
  "unit_type": "gpu_seconds",
  "price": 4.2,
  "model": "runwayml/stable-diffusion-v1-5",
  "prompt_hash": "sha256:cf1f...",
  "started_at": 1695720000,
  "completed_at": 1695720002,
  "artifact_hash": "sha256:deadbeef...",
  "coordinator_id": "coord-eu-west-1",
  "nonce": "b7f3d10b",
  "chain_id": 12345
}
```

Signed form includes:

```json
"signature": {
  "alg": "Ed25519",
  "key_id": "miner-ed25519-2025-09",
  "sig": "Fql0..."
}
```

## Multi-Signature Receipt Format

Receipts requiring attestation from multiple parties (e.g., miner + coordinator) use a `signatures` array instead of a single `signature` object.

### Schema

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `signatures` | array | conditional | Array of signature objects when multi-sig is used. |
| `threshold` | integer | optional | Minimum signatures required for validity (default: all). |
| `quorum_policy` | string | optional | Policy name: `"all"`, `"majority"`, `"threshold"`. |

### Signature Entry

Each entry in the `signatures` array:

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `alg` | string | yes | Signature algorithm (`"Ed25519"`, `"secp256k1"`). |
| `key_id` | string | yes | Signing key identifier. |
| `signer_role` | string | yes | Role of signer: `"miner"`, `"coordinator"`, `"auditor"`. |
| `signer_id` | string | yes | Address or account of the signer. |
| `sig` | string | yes | Base64url-encoded signature over canonical receipt bytes. |
| `signed_at` | integer | yes | Unix timestamp when signature was created. |
### Validation Rules

- When `signatures` is present, `signature` (singular) MUST be absent.
- Each signature is computed over the same canonical serialization (excluding all signature fields).
- `threshold` defaults to `len(signatures)` (all required) when omitted.
- Signers MUST NOT appear more than once in the array.
- At least one signer with `signer_role: "miner"` is required.
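
The structural rules above can be checked mechanically; a sketch (cryptographic verification of each `sig` over the canonical bytes is a separate step, as in the single-signature case):

```python
def validate_multisig_structure(receipt):
    """Apply the multi-signature structural rules from this section."""
    sigs = receipt.get("signatures")
    if not sigs:
        return False
    if "signature" in receipt:  # singular form MUST be absent
        return False
    signer_ids = [s["signer_id"] for s in sigs]
    if len(signer_ids) != len(set(signer_ids)):  # no duplicate signers
        return False
    if not any(s["signer_role"] == "miner" for s in sigs):
        return False  # at least one miner signature required
    threshold = receipt.get("threshold", len(sigs))  # default: all
    return len(sigs) >= threshold
```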

### Example

```json
{
  "version": "1.1",
  "receipt_id": "rcpt-20260212-ms001",
  "job_id": "job-xyz789",
  "provider": "ait1minerabc...",
  "client": "ait1clientdef...",
  "units": 3.5,
  "unit_type": "gpu_seconds",
  "started_at": 1739376000,
  "completed_at": 1739376004,
  "threshold": 2,
  "quorum_policy": "all",
  "signatures": [
    {
      "alg": "Ed25519",
      "key_id": "miner-ed25519-2026-02",
      "signer_role": "miner",
      "signer_id": "ait1minerabc...",
      "sig": "Xk9f...",
      "signed_at": 1739376005
    },
    {
      "alg": "Ed25519",
      "key_id": "coord-ed25519-2026-01",
      "signer_role": "coordinator",
      "signer_id": "coord-eu-west-1",
      "sig": "Lm3a...",
      "signed_at": 1739376006
    }
  ]
}
```

---

## ZK-Proof Metadata Extension

Receipts can carry zero-knowledge proof metadata in the `metadata.zk_proof` field, enabling privacy-preserving verification without revealing sensitive job details.

### Schema (`metadata.zk_proof`)

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `circuit_id` | string | yes | Identifier of the ZK circuit used (e.g., `"SimpleReceipt_v1"`). |
| `circuit_version` | integer | yes | Circuit version number. |
| `proof_system` | string | yes | Proof system: `"groth16"`, `"plonk"`, `"stark"`. |
| `proof` | object | yes | Proof data (system-specific). |
| `public_signals` | array | yes | Public inputs to the circuit. |
| `verifier_contract` | string | optional | On-chain verifier contract address. |
| `verification_key_hash` | string | optional | SHA-256 hash of the verification key. |
| `generated_at` | integer | yes | Unix timestamp of proof generation. |

### Groth16 Proof Object

| Field | Type | Description |
|-------|------|-------------|
| `a` | array[2] | G1 point (π_A). |
| `b` | array[2][2] | G2 point (π_B). |
| `c` | array[2] | G1 point (π_C). |
### Public Signals

For the `SimpleReceipt` circuit, `public_signals` contains:

- `[0]`: `receiptHash` — Poseidon hash of private receipt data.

For the full `ReceiptAttestation` circuit:

- `[0]`: `receiptHash`
- `[1]`: `settlementAmount`
- `[2]`: `timestamp`
### Validation Rules

- `circuit_id` MUST match a known registered circuit.
- `public_signals[0]` (receiptHash) MUST NOT be zero.
- If `verifier_contract` is provided, on-chain verification SHOULD be performed.
- Proof MUST be verified before the receipt is accepted for settlement.
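
A sketch of the structural checks (the circuit registry here is illustrative; the actual Groth16 pairing check happens in the verifier contract or a proving library, not in this helper):

```python
KNOWN_CIRCUITS = {"SimpleReceipt_v1"}  # illustrative registry
PROOF_SYSTEMS = {"groth16", "plonk", "stark"}


def validate_zk_metadata(zk):
    """Structural validation of a `metadata.zk_proof` object."""
    if zk.get("circuit_id") not in KNOWN_CIRCUITS:
        return False
    if zk.get("proof_system") not in PROOF_SYSTEMS:
        return False
    signals = zk.get("public_signals") or []
    # public_signals[0] is receiptHash and MUST NOT be zero
    return bool(signals) and int(signals[0], 16) != 0
```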

### Example

```json
{
  "version": "1.1",
  "receipt_id": "rcpt-20260212-zk001",
  "job_id": "job-priv001",
  "provider": "ait1minerxyz...",
  "client": "ait1clientabc...",
  "units": 2.0,
  "unit_type": "gpu_seconds",
  "started_at": 1739376000,
  "completed_at": 1739376003,
  "metadata": {
    "zk_proof": {
      "circuit_id": "SimpleReceipt_v1",
      "circuit_version": 1,
      "proof_system": "groth16",
      "proof": {
        "a": ["0x1a2b...", "0x3c4d..."],
        "b": [["0x5e6f...", "0x7a8b..."], ["0x9c0d...", "0xef12..."]],
        "c": ["0x3456...", "0x7890..."]
      },
      "public_signals": ["0x48fa91c3..."],
      "verifier_contract": "0xAbCdEf0123456789...",
      "verification_key_hash": "sha256:a1b2c3d4...",
      "generated_at": 1739376004
    }
  },
  "signature": {
    "alg": "Ed25519",
    "key_id": "miner-ed25519-2026-02",
    "sig": "Qr7x..."
  }
}
```

---

## Merkle Proof Anchoring Specification

Receipts can be anchored on-chain using Merkle trees, allowing efficient batch verification and compact inclusion proofs.

### Anchoring Process

1. **Batch collection**: Coordinator collects N receipts within a time window.
2. **Leaf computation**: Each leaf = `sha256(canonical_receipt_bytes)`.
3. **Tree construction**: Binary Merkle tree with leaves sorted by `receipt_id`.
4. **Root submission**: Merkle root is submitted on-chain in a single transaction.
5. **Proof distribution**: Each receipt owner receives their inclusion proof.

### Schema (`metadata.merkle_anchor`)

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `root` | string | yes | Hex-encoded Merkle root (`0x...`). |
| `leaf` | string | yes | Hex-encoded leaf hash of this receipt. |
| `proof` | array | yes | Array of sibling hashes from leaf to root. |
| `index` | integer | yes | Leaf index in the tree (0-based). |
| `tree_size` | integer | yes | Total number of leaves in the tree. |
| `block_height` | integer | optional | Blockchain block height where root was anchored. |
| `tx_hash` | string | optional | Transaction hash of the anchoring transaction. |
| `anchored_at` | integer | yes | Unix timestamp of anchoring. |

### Verification Algorithm

```
1. leaf_hash = sha256(canonical_receipt_bytes)
2. assert leaf_hash == anchor.leaf
3. current = leaf_hash
4. for i, sibling in enumerate(anchor.proof):
5.     if bit(anchor.index, i) == 0:
6.         current = sha256(current || sibling)
7.     else:
8.         current = sha256(sibling || current)
9. assert current == anchor.root
```
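
The same algorithm as a runnable Python function (hashes are passed as raw bytes; decoding the schema's `0x...` hex strings is left to the caller):

```python
import hashlib


def verify_merkle_proof(leaf, proof, index, root):
    """Recompute the root from a leaf, ordering siblings by the index bits."""
    current = leaf
    for i, sibling in enumerate(proof):
        if (index >> i) & 1 == 0:
            current = hashlib.sha256(current + sibling).digest()
        else:
            current = hashlib.sha256(sibling + current).digest()
    return current == root
```

For the receipt at `index` 2 in a 4-leaf tree, `proof` holds the sibling leaf at index 3 followed by the hash of the left subtree.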

### Validation Rules

- `leaf` MUST equal `sha256(canonical_receipt_bytes)` of the receipt.
- `proof` length MUST equal `ceil(log2(tree_size))`.
- `index` MUST be in range `[0, tree_size)`.
- If `tx_hash` is provided, the root MUST match the on-chain value.
- Merkle roots MUST be submitted within the network retention window.

### Example

```json
{
  "version": "1.1",
  "receipt_id": "rcpt-20260212-mk001",
  "job_id": "job-batch42",
  "provider": "ait1minerxyz...",
  "client": "ait1clientabc...",
  "units": 1.0,
  "unit_type": "gpu_seconds",
  "started_at": 1739376000,
  "completed_at": 1739376001,
  "metadata": {
    "merkle_anchor": {
      "root": "0x7f83b1657ff1fc53b92dc18148a1d65dfc2d4b1fa3d677284addd200126d9069",
      "leaf": "0x3e23e8160039594a33894f6564e1b1348bbd7a0088d42c4acb73eeaed59c009d",
      "proof": [
        "0x2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824",
        "0x486ea46224d1bb4fb680f34f7c9ad96a8f24ec88be73ea8e5a6c65260e9cb8a7"
      ],
      "index": 2,
      "tree_size": 4,
      "block_height": 15234,
      "tx_hash": "0xabc123def456...",
      "anchored_at": 1739376060
    }
  },
  "signature": {
    "alg": "Ed25519",
    "key_id": "coord-ed25519-2026-01",
    "sig": "Yz4w..."
  }
}
```

---

## Version History

| Version | Date | Changes |
|---------|------|---------|
| 1.0 | 2025-09 | Initial receipt schema, Ed25519 signatures, canonical serialization. |
| 1.1 | 2026-02 | Multi-signature format, ZK-proof metadata extension, Merkle proof anchoring. |
@@ -1,286 +0,0 @@
# AITBC Threat Modeling: Privacy Features

## Overview

This document provides a comprehensive threat model for AITBC's privacy-preserving features, focusing on zero-knowledge receipt attestation and confidential transactions. The analysis uses the STRIDE methodology to systematically identify threats and their mitigations.

## Document Version

- Version: 1.0
- Date: December 2024
- Status: Published - Shared with Ecosystem Partners

## Scope

### In-Scope Components

1. **ZK Receipt Attestation System**
   - Groth16 circuit implementation
   - Proof generation service
   - Verification contract
   - Trusted setup ceremony

2. **Confidential Transaction System**
   - Hybrid encryption (AES-256-GCM + X25519)
   - HSM-backed key management
   - Access control system
   - Audit logging infrastructure

### Out-of-Scope Components

- Core blockchain consensus
- Basic transaction processing
- Non-confidential marketplace operations
- Network layer security
## Threat Actors

| Actor | Motivation | Capability | Impact |
|-------|------------|------------|--------|
| Malicious Miner | Financial gain, sabotage | Access to mining software, limited compute | High |
| Compromised Coordinator | Data theft, market manipulation | System access, private keys | Critical |
| External Attacker | Financial theft, privacy breach | Public network, potential exploits | High |
| Regulator | Compliance investigation | Legal authority, subpoenas | Medium |
| Insider Threat | Data exfiltration | Internal access, knowledge | High |
| Quantum Computer | Break cryptography | Future quantum capability | Future |

## STRIDE Analysis

### 1. Spoofing

#### ZK Receipt Attestation

| Threat | Description | Likelihood | Impact | Mitigations |
|--------|-------------|------------|--------|-------------|
| Proof Forgery | Attacker creates fake ZK proofs | Medium | High | ✅ Groth16 soundness property<br>✅ Verification on-chain<br>⚠️ Trusted setup security |
| Identity Spoofing | Miner impersonates another | Low | Medium | ✅ Miner registration with KYC<br>✅ Cryptographic signatures |
| Coordinator Impersonation | Fake coordinator services | Low | High | ✅ TLS certificates<br>⚠️ DNSSEC recommended |

#### Confidential Transactions

| Threat | Description | Likelihood | Impact | Mitigations |
|--------|-------------|------------|--------|-------------|
| Key Spoofing | Fake public keys for participants | Medium | High | ✅ HSM-protected keys<br>✅ Certificate validation |
| Authorization Forgery | Fake audit authorization | Low | High | ✅ Signed tokens<br>✅ Short expiration times |
### 2. Tampering

#### ZK Receipt Attestation

| Threat | Description | Likelihood | Impact | Mitigations |
|--------|-------------|------------|--------|-------------|
| Circuit Modification | Malicious changes to circom circuit | Low | Critical | ✅ Open-source circuits<br>✅ Circuit hash verification |
| Proof Manipulation | Altering proofs during transmission | Medium | High | ✅ End-to-end encryption<br>✅ On-chain verification |
| Setup Parameter Poisoning | Compromise trusted setup | Low | Critical | ⚠️ Multi-party ceremony needed<br>⚠️ Secure destruction of toxic waste |

#### Confidential Transactions

| Threat | Description | Likelihood | Impact | Mitigations |
|--------|-------------|------------|--------|-------------|
| Data Tampering | Modify encrypted transaction data | Medium | High | ✅ AES-GCM authenticity<br>✅ Immutable audit logs |
| Key Substitution | Swap public keys in transit | Low | High | ✅ Certificate pinning<br>✅ HSM key validation |
| Access Control Bypass | Override authorization checks | Low | High | ✅ Role-based access control<br>✅ Audit logging of all changes |

### 3. Repudiation

#### ZK Receipt Attestation

| Threat | Description | Likelihood | Impact | Mitigations |
|--------|-------------|------------|--------|-------------|
| Denial of Proof Generation | Miner denies creating proof | Low | Medium | ✅ On-chain proof records<br>✅ Signed proof metadata |
| Receipt Denial | Party denies transaction occurred | Medium | Medium | ✅ Immutable blockchain ledger<br>✅ Cryptographic receipts |

#### Confidential Transactions

| Threat | Description | Likelihood | Impact | Mitigations |
|--------|-------------|------------|--------|-------------|
| Access Denial | User denies accessing data | Low | Medium | ✅ Comprehensive audit logs<br>✅ Non-repudiation signatures |
| Key Generation Denial | Deny creating encryption keys | Low | Medium | ✅ HSM audit trails<br>✅ Key rotation logs |
### 4. Information Disclosure

#### ZK Receipt Attestation

| Threat | Description | Likelihood | Impact | Mitigations |
|--------|-------------|------------|--------|-------------|
| Witness Extraction | Extract private inputs from proof | Low | Critical | ✅ Zero-knowledge property<br>✅ No knowledge of witness |
| Setup Parameter Leak | Expose toxic waste from trusted setup | Low | Critical | ⚠️ Secure multi-party setup<br>⚠️ Parameter destruction |
| Side-Channel Attacks | Timing/power analysis | Low | Medium | ✅ Constant-time implementations<br>⚠️ Needs hardware security review |

#### Confidential Transactions

| Threat | Description | Likelihood | Impact | Mitigations |
|--------|-------------|------------|--------|-------------|
| Private Key Extraction | Steal keys from HSM | Low | Critical | ✅ HSM security controls<br>✅ Hardware tamper resistance |
| Decryption Key Leak | Expose DEKs | Medium | High | ✅ Per-transaction DEKs<br>✅ Encrypted key storage |
| Metadata Analysis | Infer data from access patterns | Medium | Medium | ✅ Access logging<br>⚠️ Differential privacy needed |

### 5. Denial of Service

#### ZK Receipt Attestation

| Threat | Description | Likelihood | Impact | Mitigations |
|--------|-------------|------------|--------|-------------|
| Proof Generation DoS | Overwhelm proof service | High | Medium | ✅ Rate limiting<br>✅ Queue management<br>⚠️ Need monitoring |
| Verification Spam | Flood verification contract | High | High | ✅ Gas costs limit spam<br>⚠️ Need circuit optimization |

#### Confidential Transactions

| Threat | Description | Likelihood | Impact | Mitigations |
|--------|-------------|------------|--------|-------------|
| Key Exhaustion | Deplete HSM key slots | Medium | Medium | ✅ Key rotation<br>✅ Resource monitoring |
| Database Overload | Saturate with encrypted data | High | Medium | ✅ Connection pooling<br>✅ Query optimization |
| Audit Log Flooding | Fill audit storage | Medium | Medium | ✅ Log rotation<br>✅ Storage monitoring |
### 6. Elevation of Privilege

#### ZK Receipt Attestation

| Threat | Description | Likelihood | Impact | Mitigations |
|--------|-------------|------------|--------|-------------|
| Setup Privilege | Gain trusted setup access | Low | Critical | ⚠️ Multi-party ceremony<br>⚠️ Independent audits |
| Coordinator Compromise | Full system control | Medium | Critical | ✅ Multi-sig controls<br>✅ Regular security audits |

#### Confidential Transactions

| Threat | Description | Likelihood | Impact | Mitigations |
|--------|-------------|------------|--------|-------------|
| HSM Takeover | Gain HSM admin access | Low | Critical | ✅ HSM access controls<br>✅ Dual authorization |
| Access Control Escalation | Bypass role restrictions | Medium | High | ✅ Principle of least privilege<br>✅ Regular access reviews |
## Risk Matrix

| Threat | Likelihood | Impact | Risk Level | Priority |
|--------|------------|--------|------------|----------|
| Trusted Setup Compromise | Low | Critical | HIGH | 1 |
| HSM Compromise | Low | Critical | HIGH | 1 |
| Proof Forgery | Medium | High | HIGH | 2 |
| Private Key Extraction | Low | Critical | HIGH | 2 |
| Information Disclosure | Medium | High | MEDIUM | 3 |
| DoS Attacks | High | Medium | MEDIUM | 3 |
| Side-Channel Attacks | Low | Medium | LOW | 4 |
| Repudiation | Low | Medium | LOW | 4 |
## Implemented Mitigations

### ZK Receipt Attestation

- ✅ Groth16 soundness and zero-knowledge properties
- ✅ On-chain verification prevents tampering
- ✅ Open-source circuit code for transparency
- ✅ Rate limiting on proof generation
- ✅ Comprehensive audit logging

### Confidential Transactions

- ✅ AES-256-GCM provides confidentiality and authenticity
- ✅ HSM-backed key management prevents key extraction
- ✅ Role-based access control with time restrictions
- ✅ Per-transaction DEKs for forward secrecy
- ✅ Immutable audit trails with chain of hashes
- ✅ Multi-factor authentication for sensitive operations
## Recommended Future Improvements

### Short Term (1-3 months)

1. **Trusted Setup Ceremony**
   - Implement multi-party computation (MPC) setup
   - Engage independent auditors
   - Publicly document process

2. **Enhanced Monitoring**
   - Real-time threat detection
   - Anomaly detection for access patterns
   - Automated alerting for security events

3. **Security Testing**
   - Penetration testing by third party
   - Side-channel resistance evaluation
   - Fuzzing of circuit implementations

### Medium Term (3-6 months)

1. **Advanced Privacy**
   - Differential privacy for metadata
   - Secure multi-party computation
   - Homomorphic encryption support

2. **Quantum Resistance**
   - Evaluate post-quantum schemes
   - Migration planning for quantum threats
   - Hybrid cryptography implementations

3. **Compliance Automation**
   - Automated compliance reporting
   - Privacy impact assessments
   - Regulatory audit tools

### Long Term (6-12 months)

1. **Formal Verification**
   - Formal proofs of circuit correctness
   - Verified smart contract deployments
   - Mathematical security proofs

2. **Decentralized Trust**
   - Distributed key generation
   - Threshold cryptography
   - Community governance of security
## Security Controls Summary

### Preventive Controls

- Cryptographic guarantees (ZK proofs, encryption)
- Access control mechanisms
- Secure key management
- Network security (TLS, certificates)

### Detective Controls

- Comprehensive audit logging
- Real-time monitoring
- Anomaly detection
- Security incident response

### Corrective Controls

- Key rotation procedures
- Incident response playbooks
- Backup and recovery
- System patching processes

### Compensating Controls

- Insurance for cryptographic risks
- Legal protections
- Community oversight
- Bug bounty programs
## Compliance Mapping

| Regulation | Requirement | Implementation |
|------------|-------------|----------------|
| GDPR | Right to encryption | ✅ Opt-in confidential transactions |
| GDPR | Data minimization | ✅ Selective disclosure |
| SEC 17a-4 | Audit trail | ✅ Immutable logs |
| MiFID II | Transaction reporting | ✅ ZK proof verification |
| PCI DSS | Key management | ✅ HSM-backed keys |
## Incident Response

### Security Event Classification

1. **Critical** - HSM compromise, trusted setup breach
2. **High** - Large-scale data breach, proof forgery
3. **Medium** - Single key compromise, access violation
4. **Low** - Failed authentication, minor DoS

### Response Procedures

1. Immediate containment
2. Evidence preservation
3. Stakeholder notification
4. Root cause analysis
5. Remediation actions
6. Post-incident review
## Review Schedule

- **Monthly**: Security monitoring review
- **Quarterly**: Threat model update
- **Semi-annually**: Penetration testing
- **Annually**: Full security audit

## Contact Information

- Security Team: security@aitbc.io
- Bug Reports: security-bugs@aitbc.io
- Security Researchers: research@aitbc.io

## Acknowledgments

This threat model was developed with input from:

- AITBC Security Team
- External Security Consultants
- Community Security Researchers
- Cryptography Experts

---

*This document is living and will be updated as new threats emerge and mitigations are implemented.*
@@ -1,166 +0,0 @@
# ZK Receipt Attestation Implementation Summary

## Overview

Successfully implemented a zero-knowledge proof system for privacy-preserving receipt attestation in AITBC, enabling confidential settlements while maintaining verifiability.

## Components Implemented

### 1. ZK Circuits (`apps/zk-circuits/`)

- **Basic Circuit**: Receipt hash preimage proof in circom
- **Advanced Circuit**: Full receipt validation with pricing (WIP)
- **Build System**: npm scripts for compilation, setup, and proving
- **Testing**: Proof generation and verification tests
- **Benchmarking**: Performance measurement tools

### 2. Proof Service (`apps/coordinator-api/src/app/services/zk_proofs.py`)

- **ZKProofService**: Handles proof generation and verification
- **Privacy Levels**: Basic (hide computation) and Enhanced (hide amounts)
- **Integration**: Works with existing receipt signing system
- **Error Handling**: Graceful fallback when ZK unavailable

### 3. Receipt Integration (`apps/coordinator-api/src/app/services/receipts.py`)

- **Async Support**: Updated create_receipt to support async ZK generation
- **Optional Privacy**: ZK proofs generated only when requested
- **Backward Compatibility**: Existing receipts work unchanged

### 4. Verification Contract (`contracts/ZKReceiptVerifier.sol`)

- **On-Chain Verification**: Groth16 proof verification
- **Security Features**: Double-spend prevention, timestamp validation
- **Authorization**: Controlled access to verification functions
- **Batch Support**: Efficient batch verification

### 5. Settlement Integration (`apps/coordinator-api/aitbc/settlement/hooks.py`)

- **Privacy Options**: Settlement requests can specify privacy level
- **Proof Inclusion**: ZK proofs included in settlement messages
- **Bridge Support**: Works with existing cross-chain bridges
## Key Features

### Privacy Levels

1. **Basic**: Hide computation details, reveal settlement amount
2. **Enhanced**: Hide all amounts, prove correctness mathematically

### Performance Metrics

- **Proof Size**: ~200 bytes (Groth16)
- **Generation Time**: 5-15 seconds
- **Verification Time**: <5ms on-chain
- **Gas Cost**: ~200k gas

### Security Measures

- Trusted setup requirements documented
- Circuit audit procedures defined
- Gradual rollout strategy
- Emergency pause capabilities

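The emergency-pause capability can be illustrated with a minimal gate around verification; the `PausableVerifier` name and API here are hypothetical, not the production component.

```python
class PausableVerifier:
    """Minimal sketch of an operator-controlled kill switch for verification."""

    def __init__(self) -> None:
        self.paused = False

    def pause(self) -> None:
        self.paused = True

    def verify(self, proof_is_valid: bool) -> bool:
        # Fail closed: while paused, no proof is accepted regardless of validity.
        if self.paused:
            return False
        return proof_is_valid
```

Failing closed matters here: during an incident, a paused verifier rejects everything instead of silently falling back to accepting unproven settlements.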
## Testing Coverage

### Unit Tests

- Proof generation with various inputs
- Verification success/failure scenarios
- Privacy level validation
- Error handling

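As a sketch of the privacy-level validation case above, written with plain assertions (the real suite and helper names may differ):

```python
VALID_PRIVACY_LEVELS = {"basic", "enhanced"}

def validate_privacy_level(level: str) -> str:
    """Illustrative validator mirroring the two privacy levels above."""
    if level not in VALID_PRIVACY_LEVELS:
        raise ValueError(f"unsupported privacy level: {level}")
    return level

# Success and failure scenarios
assert validate_privacy_level("basic") == "basic"
try:
    validate_privacy_level("anonymous")
except ValueError:
    pass  # invalid levels are rejected
```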
### Integration Tests

- Receipt creation with ZK proofs
- Settlement flow with privacy
- Cross-chain bridge integration

### Benchmarks

- Proof generation time measurement
- Verification performance
- Memory usage tracking
- Gas cost estimation

## Usage Examples

### Creating a Private Receipt

```python
receipt = await receipt_service.create_receipt(
    job=job,
    miner_id=miner_id,
    job_result=result,
    result_metrics=metrics,
    privacy_level="basic"  # Enable ZK proof
)
```

### Cross-Chain Settlement with Privacy

```python
settlement = await settlement_hook.initiate_manual_settlement(
    job_id="job-123",
    target_chain_id=2,
    use_zk_proof=True,
    privacy_level="enhanced"
)
```

### On-Chain Verification

```solidity
bool verified = verifier.verifyAndRecord(
    proof.a,
    proof.b,
    proof.c,
    proof.publicSignals
);
```

## Current Status

### Completed ✅

1. Research and technology selection (Groth16)
2. Development environment setup
3. Basic circuit implementation
4. Proof generation service
5. Verification contract
6. Settlement integration
7. Comprehensive testing
8. Performance benchmarking

### Pending ⏳

1. Trusted setup ceremony (production requirement)
2. Circuit security audit
3. Full receipt validation circuit
4. Production deployment

## Next Steps for Production

### Immediate (Week 1-2)

1. Run end-to-end tests with real data
2. Optimize performance based on benchmarks
3. Review the security of the implementation

### Short Term (Month 1)

1. Plan and execute the trusted setup ceremony
2. Complete the advanced circuit with signature verification
3. Commission a third-party security audit

### Long Term (Month 2-3)

1. Deploy to production with a gradual rollout
2. Monitor performance and gas costs
3. Consider PLONK for a universal setup

## Risks and Mitigations

### Technical Risks

- **Trusted Setup**: Mitigate with multi-party ceremony
- **Performance**: Optimize circuits and use batch verification
- **Complexity**: Maintain clear documentation and examples

### Operational Risks

- **User Adoption**: Provide clear UI indicators for privacy
- **Gas Costs**: Optimize proof size and verification
- **Regulatory**: Ensure compliance with privacy regulations

## Documentation

- [ZK Technology Comparison](zk-technology-comparison.md)
- [Circuit Design](zk-receipt-attestation.md)
- [Development Guide](../apps/zk-circuits/README.md)
- [API Documentation](../docs/api/coordinator/endpoints.md)

## Conclusion

The ZK receipt attestation system provides a solid foundation for privacy-preserving settlements in AITBC. The implementation balances privacy, performance, and usability while maintaining backward compatibility with existing systems.

The modular design allows for gradual adoption and future enhancements, making it suitable for both testing and production deployment.

# Zero-Knowledge Receipt Attestation Design

## Overview

This document outlines the design for adding zero-knowledge proof capabilities to the AITBC receipt attestation system, enabling privacy-preserving settlement flows while maintaining verifiability.

## Goals

1. **Privacy**: Hide sensitive transaction details (amounts, parties, specific computations)
2. **Verifiability**: Prove receipts are valid and correctly signed without revealing contents
3. **Compatibility**: Work with existing receipt signing and settlement systems
4. **Efficiency**: Minimize proof generation and verification overhead

## Architecture

### Current Receipt System

The existing system has:

- Receipt signing with coordinator private key
- Optional coordinator attestations
- History retrieval endpoints
- Cross-chain settlement hooks

Receipt structure includes:

- Job ID and metadata
- Computation results
- Pricing information
- Miner and coordinator signatures

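The structure above can be sketched as a dataclass with a deterministic hash, which is what the circuits later commit to. Field names here are illustrative, not the production model:

```python
import hashlib
import json
from dataclasses import asdict, dataclass
from typing import Any, Dict

@dataclass
class Receipt:
    """Sketch of the receipt fields listed above (names illustrative)."""
    job_id: str
    metadata: Dict[str, Any]
    computation_result: str
    pricing: Dict[str, int]
    miner_signature: str
    coordinator_signature: str

    def receipt_hash(self) -> str:
        # Hash a canonical JSON encoding so the same receipt always hashes alike.
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()
```

Canonical encoding (sorted keys) is the important detail: without it, two serializations of the same receipt would commit to different hashes.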
### Privacy-Preserving Flow

```
1. Job Execution
       ↓
2. Receipt Generation (clear text)
       ↓
3. ZK Circuit Input Preparation
       ↓
4. ZK Proof Generation
       ↓
5. On-Chain Settlement (with proof)
       ↓
6. Verification (without revealing data)
```

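The six steps above can be sketched as one linear pipeline. Every stage here is a stub standing in for the real component (job runner, circuit, prover, chain); the function name and toy hash are illustrative only:

```python
def run_settlement_pipeline(job_input: str) -> dict:
    # 1-2. Execute the job and generate a clear-text receipt
    receipt = {"job": job_input, "result": f"result({job_input})"}
    # 3. Prepare circuit inputs (a toy hash stands in for the real commitment)
    receipt_hash = sum(map(ord, str(receipt))) & 0xFFFF
    # 4. Generate the proof (stubbed; really groth16.prove over a witness)
    proof = {"pi": receipt_hash ^ 0xBEEF}
    # 5-6. Settle on-chain with the proof, then verify without the receipt itself
    verified = (proof["pi"] ^ 0xBEEF) == receipt_hash
    return {"proof": proof, "verified": verified}
```

Note that the verification step consumes only the proof and the public hash, never the receipt contents — that asymmetry is the whole point of the flow.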
## ZK Circuit Design

### What to Prove

1. **Receipt Validity**
   - Receipt was signed by coordinator
   - Computation was performed correctly
   - Pricing follows agreed rules

2. **Settlement Conditions**
   - Amount owed is correctly calculated
   - Parties have sufficient funds/balance
   - Cross-chain transfer conditions met

### What to Hide

1. **Sensitive Data**
   - Actual computation amounts
   - Specific job details
   - Pricing rates
   - Participant identities

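Following the two lists above, preparing circuit inputs amounts to partitioning receipt data into a public set and a private set. The field names below are illustrative:

```python
def split_circuit_inputs(receipt: dict) -> tuple:
    """Partition receipt data into public and private circuit inputs (sketch)."""
    public = {
        "receiptHash": receipt["hash"],
        "settlementAmount": receipt["miner_reward"] + receipt["coordinator_fee"],
        "timestamp": receipt["timestamp"],
    }
    private = {
        "pricingRate": receipt["pricing_rate"],
        "minerReward": receipt["miner_reward"],
        "jobDetails": receipt["job_details"],
        "participants": receipt["participants"],
    }
    return public, private
```

Everything in `public` is what the verifier (and the chain) sees; everything in `private` feeds the witness and never leaves the prover.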
### Circuit Components

```circom
// High-level circuit structure
template ReceiptAttestation() {
    // Public inputs
    signal input receiptHash;
    signal input settlementAmount;
    signal input timestamp;

    // Private inputs
    signal input receipt;
    signal input computationResult;
    signal input pricingRate;
    signal input minerReward;
    signal input coordinatorFee;

    // Verify receipt signature
    component signatureVerifier = ECDSAVerify();
    // ... signature verification logic

    // Verify computation correctness
    component computationChecker = ComputationVerify();
    // ... computation verification logic

    // Verify pricing calculation
    component pricingVerifier = PricingVerify();
    // ... pricing verification logic

    // Constrain the public settlement amount
    settlementAmount <== minerReward + coordinatorFee;
}
```

## Implementation Plan

### Phase 1: Research & Prototyping

1. **Library Selection**
   - snarkjs for development (JavaScript/TypeScript)
   - circomlib for standard circuits
   - Web3.js for blockchain integration

2. **Basic Circuit**
   - Simple receipt hash preimage proof
   - ECDSA signature verification
   - Basic arithmetic operations

### Phase 2: Integration

1. **Coordinator API Updates**
   - Add ZK proof generation endpoint
   - Integrate with existing receipt signing
   - Add proof verification utilities

2. **Settlement Flow**
   - Modify cross-chain hooks to accept proofs
   - Update verification logic
   - Maintain backward compatibility

### Phase 3: Optimization

1. **Performance**
   - Trusted setup for Groth16
   - Batch proof generation
   - Recursive proofs for complex receipts

2. **Security**
   - Audit circuits
   - Formal verification
   - Side-channel resistance

## Data Flow

### Proof Generation (Coordinator)

```python
async def generate_receipt_proof(receipt: Receipt) -> ZKProof:
    # 1. Prepare circuit inputs
    public_inputs = {
        "receiptHash": hash_receipt(receipt),
        "settlementAmount": calculate_settlement(receipt),
        "timestamp": receipt.timestamp,
    }

    private_inputs = {
        "receipt": receipt,
        "computationResult": receipt.result,
        "pricingRate": receipt.pricing.rate,
        "minerReward": receipt.pricing.miner_reward,
    }

    # 2. Generate witness
    witness = generate_witness(public_inputs, private_inputs)

    # 3. Generate proof
    proof = groth16.prove(witness, proving_key)

    return {
        "proof": proof,
        "publicSignals": public_inputs,
    }
```

### Proof Verification (On-Chain/Settlement Layer)

```solidity
contract SettlementVerifier {
    // Groth16 verifier
    function verifySettlement(
        uint256[2] memory a,
        uint256[2][2] memory b,
        uint256[2] memory c,
        uint256[] memory input
    ) public pure returns (bool) {
        return verifyProof(a, b, c, input);
    }

    function settleWithProof(
        address recipient,
        uint256 amount,
        ZKProof memory proof
    ) public {
        require(verifySettlement(proof.a, proof.b, proof.c, proof.inputs));
        // Execute settlement
        _transfer(recipient, amount);
    }
}
```

## Privacy Levels

### Level 1: Basic Privacy

- Hide computation amounts
- Prove pricing correctness
- Reveal participant identities

### Level 2: Enhanced Privacy

- Hide all amounts
- Zero-knowledge participant proofs
- Anonymous settlement

### Level 3: Full Privacy

- Complete transaction privacy
- Ring signatures or similar
- Confidential transfers

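The three levels can be modelled as strictly widening sets of hidden fields — each level hides everything the previous one does plus more. A sketch with illustrative field names:

```python
from enum import Enum

class PrivacyLevel(Enum):
    BASIC = 1     # hide computation amounts only
    ENHANCED = 2  # additionally hide settlement amounts and participants
    FULL = 3      # additionally hide the transaction graph itself

HIDDEN_FIELDS = {
    PrivacyLevel.BASIC: {"computation_amounts"},
    PrivacyLevel.ENHANCED: {"computation_amounts", "settlement_amounts", "participants"},
    PrivacyLevel.FULL: {"computation_amounts", "settlement_amounts",
                        "participants", "transaction_graph"},
}
```

Encoding the levels as nested sets makes the "each level strictly stronger" invariant checkable rather than implicit.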
## Security Considerations

1. **Trusted Setup**
   - Multi-party ceremony for Groth16
   - Documentation of the setup process
   - Proof of toxic-waste destruction

2. **Circuit Security**
   - Constant-time operations
   - No side-channel leaks
   - Formal verification where possible

3. **Integration Security**
   - Maintain existing security guarantees
   - Fail-safe verification
   - Gradual rollout with monitoring

## Migration Strategy

1. **Parallel Operation**
   - Run both clear and ZK receipts
   - Gradual opt-in adoption
   - Performance monitoring

2. **Backward Compatibility**
   - Existing receipts remain valid
   - Optional ZK proofs
   - Graceful degradation

3. **Network Upgrade**
   - Coordinate with all participants
   - Clear communication
   - Rollback capability

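The parallel-operation and graceful-degradation points above suggest a verifier that dispatches on whether a proof is attached. A sketch with illustrative receipt shapes (the real check would run pairing verification, not a shape check):

```python
def verify_receipt(receipt: dict) -> bool:
    """Accept both legacy clear receipts and new ZK receipts (sketch)."""
    if "zk_proof" in receipt:
        # ZK path (stubbed): a real implementation verifies the Groth16 proof.
        proof = receipt["zk_proof"]
        return isinstance(proof, dict) and {"a", "b", "c"} <= proof.keys()
    # Legacy path: clear receipts stay valid via the coordinator signature.
    return bool(receipt.get("coordinator_signature"))
```

Because the legacy branch is untouched, existing receipts keep verifying exactly as before, which is the backward-compatibility guarantee the migration depends on.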
## Next Steps

1. **Research Task**
   - Evaluate zk-SNARKs vs zk-STARKs trade-offs
   - Benchmark proof generation times
   - Assess gas costs for on-chain verification

2. **Prototype Development**
   - Implement basic circuit in circom
   - Create proof generation service
   - Build verification contract

3. **Integration Planning**
   - Design API changes
   - Plan data migration
   - Prepare rollout strategy

# ZK Technology Comparison for Receipt Attestation

## Overview

Analysis of zero-knowledge proof systems for AITBC receipt attestation, focusing on practical considerations for integration with existing infrastructure.

## Technology Options

### 1. zk-SNARKs (Zero-Knowledge Succinct Non-Interactive Argument of Knowledge)

**Examples**: Groth16, PLONK, Halo2

**Pros**:

- **Small proof size**: ~200 bytes for Groth16
- **Fast verification**: Constant time, ~3ms on-chain
- **Mature ecosystem**: circom, snarkjs, bellman, arkworks
- **Low gas costs**: ~200k gas for verification on Ethereum
- **Industry adoption**: Used by Aztec, Tornado Cash, Zcash

**Cons**:

- **Trusted setup**: Required for Groth16 (toxic waste problem)
- **Longer proof generation**: 10-30 seconds depending on circuit size
- **Complex setup**: Ceremony needs multiple participants
- **Quantum vulnerability**: Not post-quantum secure

### 2. zk-STARKs (Zero-Knowledge Scalable Transparent Argument of Knowledge)

**Examples**: StarkEx, Winterfell

**Pros**:

- **No trusted setup**: Transparent setup process
- **Post-quantum secure**: Resistant to quantum attacks
- **Faster proving**: Often faster than SNARKs for large circuits
- **Transparent**: No toxic waste, fully verifiable setup

**Cons**:

- **Larger proofs**: ~45KB for typical circuits
- **Higher verification cost**: ~500k-1M gas on-chain
- **Newer ecosystem**: Fewer tools and libraries
- **Less adoption**: Limited production deployments

## Use Case Analysis

### Receipt Attestation Requirements

1. **Proof Size**: Important for on-chain storage costs
2. **Verification Speed**: Critical for settlement latency
3. **Setup Complexity**: Affects deployment timeline
4. **Ecosystem Maturity**: Impacts development speed
5. **Privacy Needs**: Moderate (hiding amounts, not full anonymity)

### Quantitative Comparison

| Metric | Groth16 (SNARK) | PLONK (SNARK) | STARK |
|--------|-----------------|---------------|-------|
| Proof Size | 200 bytes | 400-500 bytes | 45KB |
| Prover Time | 10-30s | 5-15s | 2-10s |
| Verifier Time | 3ms | 5ms | 50ms |
| Gas Cost | 200k | 300k | 800k |
| Trusted Setup | Yes | Universal | No |
| Library Support | Excellent | Good | Limited |

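The table can equally be kept as data, which makes trade-off queries mechanical. The numbers below are copied from the table; the helper function is illustrative:

```python
SYSTEMS = {
    "Groth16": {"proof_bytes": 200,    "verify_gas": 200_000, "trusted_setup": "circuit-specific"},
    "PLONK":   {"proof_bytes": 450,    "verify_gas": 300_000, "trusted_setup": "universal"},
    "STARK":   {"proof_bytes": 45_000, "verify_gas": 800_000, "trusted_setup": "none"},
}

def cheapest_on_chain() -> str:
    # Rank by on-chain verification gas, the dominant recurring cost.
    return min(SYSTEMS, key=lambda name: SYSTEMS[name]["verify_gas"])
```

Ranking by verification gas (rather than prover time) matches the requirements list above: verification cost recurs on every settlement, while proving happens once off-chain.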
## Recommendation

### Phase 1: Groth16 for MVP

**Rationale**:

1. **Proven technology**: Battle-tested in production
2. **Small proofs**: Essential for cost-effective on-chain verification
3. **Fast verification**: Critical for settlement performance
4. **Tool maturity**: circom + snarkjs ecosystem
5. **Community knowledge**: Extensive documentation and examples

**Mitigations for trusted setup**:

- Multi-party ceremony with >100 participants
- Public documentation of the process
- Consider PLONK for Phase 2 if the setup becomes a bottleneck

### Phase 2: Evaluate PLONK

**Rationale**:

- Universal trusted setup (one-time for all circuits)
- Slightly larger proofs, but acceptable
- More flexible for circuit updates
- Growing ecosystem support

### Phase 3: Consider STARKs

**Rationale**:

- If quantum resistance becomes a priority
- If proof size optimizations improve
- If gas costs become less critical

## Implementation Strategy

### Circuit Complexity Analysis

**Basic Receipt Circuit**:

- Hash verification: ~50 constraints
- Signature verification: ~10,000 constraints
- Arithmetic operations: ~100 constraints
- Total: ~10,150 constraints

**With Privacy Features**:

- Range proofs: ~1,000 constraints
- Merkle proofs: ~1,000 constraints
- Additional checks: ~500 constraints
- Total: ~12,650 constraints

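The totals above are simple sums and can be checked directly:

```python
BASIC_CIRCUIT = {"hash_verification": 50, "signature_verification": 10_000, "arithmetic": 100}
PRIVACY_EXTRAS = {"range_proofs": 1_000, "merkle_proofs": 1_000, "additional_checks": 500}

basic_total = sum(BASIC_CIRCUIT.values())                   # 10,150 constraints
private_total = basic_total + sum(PRIVACY_EXTRAS.values())  # 12,650 constraints
```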
### Performance Estimates

**Groth16**:

- Setup time: 2-5 hours
- Proving time: 5-15 seconds
- Verification: 3ms
- Proof size: 200 bytes

**Infrastructure Impact**:

- Coordinator: Additional 5-15s per receipt
- Settlement layer: Minimal impact (fast verification)
- Storage: Negligible increase

## Security Considerations

### Trusted Setup Risks

1. **Toxic Waste**: If compromised, proofs can be forged
2. **Setup Integrity**: Requires honest participants
3. **Documentation**: Must be publicly verifiable

### Mitigation Strategies

1. **Multi-party Ceremony**:
   - Minimum 100 participants
   - Geographically distributed
   - Public livestream

2. **Circuit Audits**:
   - Formal verification where possible
   - Third-party security review
   - Public disclosure of circuits

3. **Gradual Rollout**:
   - Start with low-value transactions
   - Monitor for anomalies
   - Emergency pause capability

## Development Plan

### Week 1-2: Environment Setup

- Install circom and snarkjs
- Create a basic test circuit
- Benchmark proof generation

### Week 3-4: Basic Circuit

- Implement receipt hash verification
- Add signature verification
- Test with sample receipts

### Week 5-6: Integration

- Add to the coordinator API
- Create the verification contract
- Test the settlement flow

### Week 7-8: Trusted Setup

- Plan ceremony logistics
- Prepare ceremony software
- Execute the multi-party setup

### Week 9-10: Testing & Audit

- End-to-end testing
- Security review
- Performance optimization

## Next Steps

1. **Immediate**: Set up the development environment
2. **Research**: Deep dive into circom best practices
3. **Prototype**: Build a minimal viable circuit
4. **Evaluate**: Performance with real receipt data
5. **Decide**: Final technology choice based on testing