Move blockchain app READMEs to centralized documentation
Some checks failed
API Endpoint Tests / test-api-endpoints (push) Successful in 10s
Blockchain Synchronization Verification / sync-verification (push) Failing after 3s
CLI Tests / test-cli (push) Failing after 4s
Documentation Validation / validate-docs (push) Successful in 8s
Documentation Validation / validate-policies-strict (push) Successful in 4s
Integration Tests / test-service-integration (push) Successful in 38s
Multi-Node Blockchain Health Monitoring / health-check (push) Successful in 2s
P2P Network Verification / p2p-verification (push) Successful in 3s
Security Scanning / security-scan (push) Successful in 40s
Smart Contract Tests / test-solidity (aitbc-token: packages/solidity/aitbc-token) (push) Successful in 15s
Smart Contract Tests / lint-solidity (push) Successful in 8s

- Relocate blockchain-event-bridge README content to docs/apps/blockchain/blockchain-event-bridge.md
- Relocate blockchain-explorer README content to docs/apps/blockchain/blockchain-explorer.md
- Replace app READMEs with redirect notices pointing to new documentation location
- Consolidate documentation in central docs/ directory for better organization
Author: aitbc
Date: 2026-04-23 12:24:48 +02:00
Parent: cd240485c6
Commit: 522655ef92
55 changed files with 7033 additions and 1536 deletions

docs/apps/README.md (new file, 43 lines)

@@ -0,0 +1,43 @@
# AITBC Apps Documentation
Complete documentation for all AITBC applications and services.
## Categories
- [Blockchain](blockchain/) - Blockchain node, event bridge, and explorer
- [Coordinator](coordinator/) - Coordinator API and agent coordination
- [Agents](agents/) - Agent services and AI engine
- [Exchange](exchange/) - Exchange services and trading engine
- [Marketplace](marketplace/) - Marketplace and pool hub
- [Wallet](wallet/) - Multi-chain wallet services
- [Infrastructure](infrastructure/) - Monitoring, load balancing, and infrastructure
- [Plugins](plugins/) - Plugin system (analytics, marketplace, registry, security)
- [Crypto](crypto/) - Cryptographic services (zk-circuits)
- [Compliance](compliance/) - Compliance services
- [Mining](mining/) - Mining services
- [Global AI](global-ai/) - Global AI agents
- [Explorer](explorer/) - Blockchain explorer services
## Quick Links
- [Blockchain Node](blockchain/blockchain-node.md) - Production-ready blockchain node
- [Coordinator API](coordinator/coordinator-api.md) - Job coordination service
- [Marketplace](marketplace/marketplace.md) - GPU marketplace
- [Wallet](wallet/wallet.md) - Multi-chain wallet
## Documentation Standards
Each app documentation includes:
- Overview and architecture
- Quick start guide (end users)
- Developer guide
- API reference
- Configuration
- Troubleshooting
- Security notes
## Status
- **Total Apps**: 23 non-empty apps
- **Documented**: 23/23 (100%)
- **Last Updated**: 2026-04-23


@@ -0,0 +1,15 @@
# Agent Applications
Agent services and AI engine for autonomous operations.
## Applications
- [Agent Services](agent-services.md) - Agent bridge, compliance, protocols, registry, and trading
- [AI Engine](ai-engine.md) - AI engine for autonomous agent operations
## Features
- Agent communication protocols
- Agent compliance checking
- Agent registry and discovery
- Agent trading capabilities


@@ -0,0 +1,211 @@
# Agent Services
## Status
✅ Operational
## Overview
Collection of agent-related services including agent bridge, compliance, protocols, registry, and trading capabilities.
## Architecture
### Components
- **Agent Bridge**: Bridge service for agent communication across networks
- **Agent Compliance**: Compliance checking and validation for agents
- **Agent Coordinator**: Coordination service for agent management
- **Agent Protocols**: Communication protocols for agent interaction
- **Agent Registry**: Central registry for agent registration and discovery
- **Agent Trading**: Trading capabilities for agent-based transactions
## Quick Start (End Users)
### Prerequisites
- Python 3.13+
- Network connectivity for agent communication
- Valid agent credentials
### Installation
```bash
cd /opt/aitbc/apps/agent-services
# Install individual service dependencies
(cd agent-bridge && pip install -r requirements.txt)
(cd agent-compliance && pip install -r requirements.txt)
# ... repeat for other services
```
### Configuration
Each service has its own configuration file. Configure environment variables for each service:
```bash
# Agent Bridge
export AGENT_BRIDGE_ENDPOINT="http://localhost:8001"
export AGENT_BRIDGE_API_KEY="your-api-key"
# Agent Registry
export REGISTRY_DATABASE_URL="postgresql://user:pass@localhost/agent_registry"
```
### Running Services
```bash
# Start each service in its own terminal (each command blocks)
(cd agent-bridge && python main.py)
(cd agent-compliance && python main.py)
# ... repeat for other services
```
## Developer Guide
### Development Setup
1. Clone the repository
2. Navigate to the specific service directory
3. Create virtual environment: `python -m venv .venv`
4. Install dependencies: `pip install -r requirements.txt`
5. Configure environment variables
6. Run tests: `pytest tests/`
### Project Structure
```
agent-services/
├── agent-bridge/ # Agent communication bridge
├── agent-compliance/ # Compliance checking service
├── agent-coordinator/ # Agent coordination (see coordinator/agent-coordinator.md)
├── agent-protocols/ # Communication protocols
├── agent-registry/ # Agent registration and discovery
└── agent-trading/ # Agent trading capabilities
```
### Testing
```bash
# Run tests for a specific service
(cd agent-bridge && pytest tests/)
# Run all service tests
pytest agent-*/tests/
```
## API Reference
### Agent Bridge
#### Register Bridge
```http
POST /api/v1/bridge/register
Content-Type: application/json
{
"agent_id": "string",
"network": "string",
"endpoint": "string"
}
```
#### Send Message
```http
POST /api/v1/bridge/send
Content-Type: application/json
{
"from_agent": "string",
"to_agent": "string",
"message": {},
"protocol": "string"
}
```
### Agent Registry
#### Register Agent
```http
POST /api/v1/registry/agents
Content-Type: application/json
{
"agent_id": "string",
"agent_type": "string",
"capabilities": ["string"],
"metadata": {}
}
```
#### Query Agents
```http
GET /api/v1/registry/agents?type=agent_type&capability=capability
```
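The register/query semantics above can be sketched with a minimal in-memory model. This is illustrative only — the real registry service persists agents via `REGISTRY_DATABASE_URL` — but it shows how `type` and `capability` filters are meant to combine:

```python
# Minimal in-memory sketch of the registry's register/query semantics.
# Illustrative only: the real service persists agents to a database.

class AgentRegistry:
    def __init__(self):
        self._agents = {}

    def register(self, agent_id, agent_type, capabilities, metadata=None):
        """Equivalent of POST /api/v1/registry/agents."""
        self._agents[agent_id] = {
            "agent_id": agent_id,
            "agent_type": agent_type,
            "capabilities": list(capabilities),
            "metadata": metadata or {},
        }

    def query(self, type=None, capability=None):
        """Equivalent of GET /api/v1/registry/agents?type=...&capability=...
        Filters are ANDed; a missing filter matches everything."""
        results = []
        for agent in self._agents.values():
            if type is not None and agent["agent_type"] != type:
                continue
            if capability is not None and capability not in agent["capabilities"]:
                continue
            results.append(agent)
        return results

registry = AgentRegistry()
registry.register("agent-1", "trading", ["buy", "sell"])
registry.register("agent-2", "compliance", ["audit"])
print([a["agent_id"] for a in registry.query(type="trading", capability="sell")])  # ['agent-1']
```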
### Agent Compliance
#### Check Compliance
```http
POST /api/v1/compliance/check
Content-Type: application/json
{
"agent_id": "string",
"action": "string",
"context": {}
}
```
#### Get Compliance Report
```http
GET /api/v1/compliance/report/{agent_id}
```
### Agent Trading
#### Submit Trade
```http
POST /api/v1/trading/submit
Content-Type: application/json
{
"agent_id": "string",
"trade_type": "buy|sell",
"asset": "string",
"quantity": 100,
"price": 1.0
}
```
#### Get Trade History
```http
GET /api/v1/trading/history/{agent_id}
```
## Configuration
### Agent Bridge
- `AGENT_BRIDGE_ENDPOINT`: Bridge service endpoint
- `AGENT_BRIDGE_API_KEY`: API key for authentication
- `BRIDGE_PROTOCOLS`: Supported communication protocols
### Agent Registry
- `REGISTRY_DATABASE_URL`: Database connection string
- `REGISTRY_CACHE_TTL`: Cache time-to-live
- `REGISTRY_SYNC_INTERVAL`: Sync interval for agent updates
### Agent Compliance
- `COMPLIANCE_RULES_PATH`: Path to compliance rules
- `COMPLIANCE_CHECK_INTERVAL`: Interval for compliance checks
- `COMPLIANCE_ALERT_THRESHOLD`: Threshold for compliance alerts
### Agent Trading
- `TRADING_FEE_PERCENTAGE`: Trading fee percentage
- `TRADING_MIN_ORDER_SIZE`: Minimum order size
- `TRADING_MAX_ORDER_SIZE`: Maximum order size
## Troubleshooting
**Bridge connection failed**: Check network connectivity and endpoint configuration.
**Agent not registered**: Verify agent registration with registry service.
**Compliance check failed**: Review compliance rules and agent configuration.
**Trade submission failed**: Check agent balance and trading parameters.
## Security Notes
- Use API keys for service authentication
- Encrypt agent communication channels
- Validate all agent actions through compliance service
- Monitor trading activities for suspicious patterns
- Regularly audit agent registry entries


@@ -0,0 +1,179 @@
# AI Engine
## Status
✅ Operational
## Overview
AI engine for autonomous agent operations, decision making, and learning capabilities.
## Architecture
### Core Components
- **Decision Engine**: AI-powered decision making module
- **Learning System**: Real-time learning and adaptation
- **Model Management**: Model deployment and versioning
- **Inference Engine**: High-performance inference for AI models
- **Task Scheduler**: AI-driven task scheduling and optimization
## Quick Start (End Users)
### Prerequisites
- Python 3.13+
- GPU support (optional for accelerated inference)
- AI model files
### Installation
```bash
cd /opt/aitbc/apps/ai-engine
.venv/bin/pip install -r requirements.txt
```
### Configuration
Set environment variables in `.env`:
```bash
AI_MODEL_PATH=/path/to/models
INFERENCE_DEVICE=cpu  # or "cuda" for GPU inference
MAX_CONCURRENT_TASKS=10
LEARNING_ENABLED=true
```
### Running the Service
```bash
.venv/bin/python main.py
```
## Developer Guide
### Development Setup
1. Clone the repository
2. Create virtual environment: `python -m venv .venv`
3. Install dependencies: `pip install -r requirements.txt`
4. Download or train AI models
5. Configure model paths
6. Run tests: `pytest tests/`
### Project Structure
```
ai-engine/
├── src/
│ ├── decision_engine/ # Decision making logic
│ ├── learning_system/ # Learning and adaptation
│ ├── model_management/ # Model deployment
│ ├── inference_engine/ # Inference service
│ └── task_scheduler/ # AI-driven scheduling
├── models/ # AI model files
├── tests/ # Test suite
└── pyproject.toml # Project configuration
```
### Testing
```bash
# Run all tests
pytest tests/
# Run specific test
pytest tests/test_inference.py
# Run with GPU support
CUDA_VISIBLE_DEVICES=0 pytest tests/
```
## API Reference
### Decision Making
#### Make Decision
```http
POST /api/v1/ai/decision
Content-Type: application/json
{
"context": {},
"options": ["option1", "option2"],
"constraints": {}
}
```
#### Get Decision History
```http
GET /api/v1/ai/decisions?limit=10
```
### Learning
#### Trigger Learning
```http
POST /api/v1/ai/learning/train
Content-Type: application/json
{
"data_source": "string",
"epochs": 100,
"batch_size": 32
}
```
#### Get Learning Status
```http
GET /api/v1/ai/learning/status
```
### Inference
#### Run Inference
```http
POST /api/v1/ai/inference
Content-Type: application/json
{
"model": "string",
"input": {},
"parameters": {}
}
```
#### Batch Inference
```http
POST /api/v1/ai/inference/batch
Content-Type: application/json
{
"model": "string",
"inputs": [{}],
"parameters": {}
}
```
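For large workloads, the batch endpoint is typically fed fixed-size chunks. A small helper for splitting inputs into request bodies — the payload shape mirrors the batch schema above, while the default chunk size of 32 (matching `BATCH_SIZE`) is an assumption:

```python
# Split a list of inputs into request bodies for the batch endpoint.
# Payload shape follows the schema above; batch size 32 is an assumption
# mirroring the BATCH_SIZE configuration default.

def build_batch_requests(model, inputs, parameters=None, batch_size=32):
    """Yield request bodies for POST /api/v1/ai/inference/batch."""
    if batch_size < 1:
        raise ValueError("batch_size must be >= 1")
    for start in range(0, len(inputs), batch_size):
        yield {
            "model": model,
            "inputs": inputs[start:start + batch_size],
            "parameters": parameters or {},
        }

bodies = list(build_batch_requests("demo-model", [{"x": i} for i in range(70)]))
print(len(bodies))  # 3 batches: 32 + 32 + 6
```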
## Configuration
### Environment Variables
- `AI_MODEL_PATH`: Path to AI model files
- `INFERENCE_DEVICE`: Device for inference (cpu/cuda)
- `MAX_CONCURRENT_TASKS`: Maximum concurrent inference tasks
- `LEARNING_ENABLED`: Enable/disable learning system
- `LEARNING_RATE`: Learning rate for training
- `BATCH_SIZE`: Batch size for inference
- `MODEL_CACHE_SIZE`: Cache size for loaded models
### Model Management
- **Model Versioning**: Track model versions and deployments
- **Model Cache**: Cache loaded models for faster inference
- **Model Auto-scaling**: Scale inference based on load
## Troubleshooting
**Model loading failed**: Check model path and file integrity.
**Inference slow**: Verify GPU availability and batch size settings.
**Learning not progressing**: Check learning rate and data quality.
**Out of memory errors**: Reduce batch size or model size.
## Security Notes
- Validate all inference inputs
- Sanitize model outputs
- Monitor for adversarial attacks
- Regularly update AI models
- Implement rate limiting for inference endpoints


@@ -0,0 +1,22 @@
# Blockchain Applications
Core blockchain infrastructure for AITBC.
## Applications
- [Blockchain Node](blockchain-node.md) - Production-ready blockchain node with PoA consensus
- [Blockchain Event Bridge](blockchain-event-bridge.md) - Event bridge for blockchain events
- [Blockchain Explorer](blockchain-explorer.md) - Blockchain explorer and analytics
## Features
- PoA consensus with single proposer
- Transaction processing (TRANSFER, RECEIPT_CLAIM, MESSAGE, GPU_MARKETPLACE, EXCHANGE)
- Gossip-based peer-to-peer networking
- RESTful RPC API
- Prometheus metrics
- Multi-chain support
## Quick Start
See individual application documentation for setup instructions.


@@ -0,0 +1,135 @@
# Blockchain Event Bridge
Bridge between AITBC blockchain events and OpenClaw agent triggers using a hybrid event-driven and polling approach.
## Overview
This service connects AITBC blockchain events (blocks, transactions, smart contract events) to OpenClaw agent actions through:
- **Event-driven**: Subscribe to gossip broker topics for real-time critical triggers
- **Polling**: Periodic checks for batch operations and conditions
- **Smart Contract Events**: Monitor contract events via blockchain RPC (Phase 2)
## Features
- Subscribes to blockchain block events via gossip broker
- Subscribes to transaction events (when available)
- Monitors smart contract events via blockchain RPC:
- AgentStaking (stake creation, rewards, tier updates)
- PerformanceVerifier (performance verification, penalties, rewards)
- AgentServiceMarketplace (service listings, purchases)
- BountyIntegration (bounty creation, completion)
- CrossChainBridge (bridge initiation, completion)
- Triggers coordinator API actions based on blockchain events
- Triggers agent daemon actions for agent wallet transactions
- Triggers marketplace state updates
- Configurable action handlers (enable/disable per type)
- Prometheus metrics for monitoring
- Health check endpoint
## Installation
```bash
cd apps/blockchain-event-bridge
poetry install
```
## Configuration
Environment variables:
- `BLOCKCHAIN_RPC_URL` - Blockchain RPC endpoint (default: `http://localhost:8006`)
- `GOSSIP_BACKEND` - Gossip broker backend: `memory`, `broadcast`, or `redis` (default: `memory`)
- `GOSSIP_BROADCAST_URL` - Broadcast URL for Redis backend (optional)
- `COORDINATOR_API_URL` - Coordinator API endpoint (default: `http://localhost:8011`)
- `COORDINATOR_API_KEY` - Coordinator API key (optional)
- `SUBSCRIBE_BLOCKS` - Subscribe to block events (default: `true`)
- `SUBSCRIBE_TRANSACTIONS` - Subscribe to transaction events (default: `true`)
- `ENABLE_AGENT_DAEMON_TRIGGER` - Enable agent daemon triggers (default: `true`)
- `ENABLE_COORDINATOR_API_TRIGGER` - Enable coordinator API triggers (default: `true`)
- `ENABLE_MARKETPLACE_TRIGGER` - Enable marketplace triggers (default: `true`)
- `ENABLE_POLLING` - Enable polling layer (default: `false`)
- `POLLING_INTERVAL_SECONDS` - Polling interval in seconds (default: `60`)
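Loading these settings can be sketched with a small stdlib helper (the actual service defines its own `config.py`; this just shows the variable names and defaults listed above):

```python
import os

# Sketch of reading the bridge's environment configuration with the
# documented defaults. The real service uses its own config.py.

def _env_bool(name, default):
    return os.environ.get(name, str(default)).strip().lower() in ("1", "true", "yes")

def load_bridge_config():
    return {
        "blockchain_rpc_url": os.environ.get("BLOCKCHAIN_RPC_URL", "http://localhost:8006"),
        "gossip_backend": os.environ.get("GOSSIP_BACKEND", "memory"),
        "coordinator_api_url": os.environ.get("COORDINATOR_API_URL", "http://localhost:8011"),
        "subscribe_blocks": _env_bool("SUBSCRIBE_BLOCKS", True),
        "enable_polling": _env_bool("ENABLE_POLLING", False),
        "polling_interval_seconds": int(os.environ.get("POLLING_INTERVAL_SECONDS", "60")),
    }

config = load_bridge_config()
print(config["gossip_backend"])
```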
## Running
### Development
```bash
poetry run uvicorn blockchain_event_bridge.main:app --reload --host 127.0.0.1 --port 8204
```
### Production (Systemd)
```bash
sudo systemctl start aitbc-blockchain-event-bridge
sudo systemctl enable aitbc-blockchain-event-bridge
```
## API Endpoints
- `GET /` - Service information
- `GET /health` - Health check
- `GET /metrics` - Prometheus metrics
## Architecture
```
blockchain-event-bridge/
├── src/blockchain_event_bridge/
│ ├── main.py # FastAPI app
│ ├── config.py # Settings
│ ├── bridge.py # Core bridge logic
│ ├── metrics.py # Prometheus metrics
│ ├── event_subscribers/ # Event subscription modules
│ ├── action_handlers/ # Action handler modules
│ └── polling/ # Polling modules
└── tests/
```
## Event Flow
1. Blockchain publishes block event to gossip broker (topic: "blocks")
2. Block event subscriber receives event
3. Bridge parses block data and extracts transactions
4. Bridge triggers appropriate action handlers:
- Coordinator API handler for AI jobs, agent messages
- Agent daemon handler for agent wallet transactions
- Marketplace handler for marketplace listings
5. Action handlers make HTTP calls to respective services
6. Metrics are recorded for monitoring
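Steps 2–4 above amount to a dispatch loop. A sketch of that routing — the handler names and the transaction-type field (`"txs"`, `"type"`) are illustrative assumptions, not the bridge's actual identifiers:

```python
# Illustrative sketch of the event flow: parse a block event and route
# each transaction to a handler by type. Field names and handler names
# are assumptions, not the bridge's real identifiers.

def handle_coordinator(tx):
    return ("coordinator", tx["id"])

def handle_agent_daemon(tx):
    return ("agent-daemon", tx["id"])

def handle_marketplace(tx):
    return ("marketplace", tx["id"])

HANDLERS = {
    "AI_JOB": handle_coordinator,
    "AGENT_WALLET": handle_agent_daemon,
    "MARKETPLACE_LISTING": handle_marketplace,
}

def dispatch_block_event(block):
    """Extract transactions from a block event and trigger handlers."""
    actions = []
    for tx in block.get("txs", []):
        handler = HANDLERS.get(tx.get("type"))
        if handler is not None:  # unknown transaction types are skipped
            actions.append(handler(tx))
    return actions

block = {"height": 42, "txs": [
    {"id": "tx1", "type": "AI_JOB"},
    {"id": "tx2", "type": "MARKETPLACE_LISTING"},
    {"id": "tx3", "type": "UNKNOWN"},
]}
print(dispatch_block_event(block))  # [('coordinator', 'tx1'), ('marketplace', 'tx2')]
```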
## CLI Commands
The blockchain event bridge service includes CLI commands for management and monitoring:
```bash
# Health check
aitbc-cli bridge health
# Get Prometheus metrics
aitbc-cli bridge metrics
# Get detailed service status
aitbc-cli bridge status
# Show current configuration
aitbc-cli bridge config
# Restart the service (via systemd)
aitbc-cli bridge restart
```
All commands support `--test-mode` flag for testing without connecting to the service.
## Testing
```bash
poetry run pytest
```
## Future Enhancements
- Phase 2: Smart contract event subscription
- Phase 3: Enhanced polling layer for batch operations
- WebSocket support for real-time event streaming
- Event replay for missed events


@@ -0,0 +1,396 @@
# AITBC Blockchain Explorer - Enhanced Version
## Overview
The enhanced AITBC Blockchain Explorer provides comprehensive blockchain exploration capabilities with advanced search, analytics, and export features that match the power of CLI tools while providing an intuitive web interface.
## 🚀 New Features
### 🔍 Advanced Search
- **Multi-criteria filtering**: Search by address, amount range, transaction type, and time range
- **Complex queries**: Combine multiple filters for precise results
- **Search history**: Save and reuse common searches
- **Real-time results**: Instant search with pagination
### 📊 Analytics Dashboard
- **Transaction volume analytics**: Visualize transaction patterns over time
- **Network activity monitoring**: Track blockchain health and performance
- **Validator performance**: Monitor validator statistics and rewards
- **Time period analysis**: 1h, 24h, 7d, 30d views with interactive charts
### 📤 Data Export
- **Multiple formats**: Export to CSV, JSON for analysis
- **Custom date ranges**: Export specific time periods
- **Bulk operations**: Export large datasets efficiently
- **Search result exports**: Export filtered search results
### ⚡ Real-time Updates
- **Live transaction feed**: Monitor transactions as they happen
- **Real-time block updates**: See new blocks immediately
- **Network status monitoring**: Track blockchain health
- **Alert system**: Get notified about important events
## 🛠️ Installation
### Prerequisites
- Python 3.13+
- Node.js (for frontend development)
- Access to AITBC blockchain node
### Setup
```bash
# Clone the repository
git clone https://github.com/aitbc/blockchain-explorer.git
cd blockchain-explorer
# Install dependencies
pip install -r requirements.txt
# Run the explorer
python main.py
```
The explorer will be available at `http://localhost:3001`
## 🔧 Configuration
### Environment Variables
```bash
# Blockchain node URL
export BLOCKCHAIN_RPC_URL="http://localhost:8082"
# External node URL (for backup)
export EXTERNAL_RPC_URL="http://aitbc.keisanki.net:8082"
# Explorer settings
export EXPLORER_HOST="0.0.0.0"
export EXPLORER_PORT="3001"
```
### Configuration File
Create `.env` file:
```env
BLOCKCHAIN_RPC_URL=http://localhost:8082
EXTERNAL_RPC_URL=http://aitbc.keisanki.net:8082
EXPLORER_HOST=0.0.0.0
EXPLORER_PORT=3001
```
## 📚 API Documentation
### Search Endpoints
#### Advanced Transaction Search
```http
GET /api/search/transactions
```
Query Parameters:
- `address` (string): Filter by address
- `amount_min` (float): Minimum amount
- `amount_max` (float): Maximum amount
- `tx_type` (string): Transaction type (transfer, stake, smart_contract)
- `since` (datetime): Start date
- `until` (datetime): End date
- `limit` (int): Results per page (max 1000)
- `offset` (int): Pagination offset
Example:
```bash
curl "http://localhost:3001/api/search/transactions?address=0x123...&amount_min=1.0&limit=50"
```
#### Advanced Block Search
```http
GET /api/search/blocks
```
Query Parameters:
- `validator` (string): Filter by validator address
- `since` (datetime): Start date
- `until` (datetime): End date
- `min_tx` (int): Minimum transaction count
- `limit` (int): Results per page (max 1000)
- `offset` (int): Pagination offset
### Analytics Endpoints
#### Analytics Overview
```http
GET /api/analytics/overview
```
Query Parameters:
- `period` (string): Time period (1h, 24h, 7d, 30d)
Response:
```json
{
"total_transactions": "1,234",
"transaction_volume": "5,678.90 AITBC",
"active_addresses": "89",
"avg_block_time": "2.1s",
"volume_data": {
"labels": ["00:00", "02:00", "04:00"],
"values": [100, 120, 110]
},
"activity_data": {
"labels": ["00:00", "02:00", "04:00"],
"values": [50, 60, 55]
}
}
```
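Consuming this response is straightforward. For example, finding the peak-volume interval from `volume_data`, using the sample payload above:

```python
import json

# Find the peak-volume interval in an analytics overview response,
# using the sample payload shown above.

sample = json.loads("""
{
  "total_transactions": "1,234",
  "volume_data": {"labels": ["00:00", "02:00", "04:00"],
                  "values": [100, 120, 110]}
}
""")

def peak_interval(volume_data):
    """Return (label, value) for the highest-volume bucket."""
    pairs = zip(volume_data["labels"], volume_data["values"])
    return max(pairs, key=lambda p: p[1])

print(peak_interval(sample["volume_data"]))  # ('02:00', 120)
```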
### Export Endpoints
#### Export Search Results
```http
GET /api/export/search
```
Query Parameters:
- `format` (string): Export format (csv, json)
- `type` (string): Data type (transactions, blocks)
- `data` (string): JSON-encoded search results
#### Export Latest Blocks
```http
GET /api/export/blocks
```
Query Parameters:
- `format` (string): Export format (csv, json)
## 🎯 Usage Examples
### Advanced Search
1. **Search by address and amount range**:
- Enter address in search field
- Click "Advanced" to expand options
- Set amount range (min: 1.0, max: 100.0)
- Click "Search Transactions"
2. **Search blocks by validator**:
- Expand advanced search
- Enter validator address
- Set time range if needed
- Click "Search Blocks"
### Analytics
1. **View 24-hour analytics**:
- Select "Last 24 Hours" from dropdown
- View transaction volume chart
- Check network activity metrics
2. **Compare time periods**:
- Switch between 1h, 24h, 7d, 30d views
- Observe trends and patterns
### Export Data
1. **Export search results**:
- Perform search
- Click "Export CSV" or "Export JSON"
- Download file automatically
2. **Export latest blocks**:
- Go to latest blocks section
- Click "Export" button
- Choose format
## 🔍 CLI vs Web Explorer Feature Comparison
| Feature | CLI | Web Explorer |
|---------|-----|--------------|
| **Basic Search** | ✅ `aitbc blockchain transaction` | ✅ Simple search |
| **Advanced Search** | ✅ `aitbc blockchain search` | ✅ Advanced search form |
| **Address Analytics** | ✅ `aitbc blockchain address` | ✅ Address details |
| **Transaction Volume** | ✅ `aitbc blockchain analytics` | ✅ Volume charts |
| **Data Export** | ✅ `--output csv/json` | ✅ Export buttons |
| **Real-time Monitoring** | ✅ `aitbc blockchain monitor` | ✅ Live updates |
| **Visual Analytics** | ❌ Text only | ✅ Interactive charts |
| **User Interface** | ❌ Command line | ✅ Web interface |
| **Mobile Access** | ❌ Limited | ✅ Responsive |
## 🚀 Performance
### Optimization Features
- **Caching**: Frequently accessed data cached for performance
- **Pagination**: Large result sets paginated to prevent memory issues
- **Async operations**: Non-blocking API calls for better responsiveness
- **Compression**: Gzip compression for API responses
### Performance Metrics
- **Page load time**: < 2 seconds for analytics dashboard
- **Search response**: < 500ms for filtered searches
- **Export generation**: < 30 seconds for 1000+ records
- **Real-time updates**: < 5 second latency
## 🔒 Security
### Security Features
- **Input validation**: All user inputs validated and sanitized
- **Rate limiting**: API endpoints protected from abuse
- **CORS protection**: Cross-origin requests controlled
- **HTTPS support**: SSL/TLS encryption for production
### Security Best Practices
- **No sensitive data exposure**: Private keys never displayed
- **Secure headers**: Security headers implemented
- **Input sanitization**: XSS protection enabled
- **Error handling**: No sensitive information in error messages
## 🐛 Troubleshooting
### Common Issues
#### Explorer not loading
```bash
# Check if port is available
netstat -tulpn | grep 3001
# Check logs
python main.py --log-level debug
```
#### Search not working
```bash
# Test blockchain node connectivity
curl http://localhost:8082/rpc/head
# Check API endpoints
curl http://localhost:3001/health
```
#### Analytics not displaying
```bash
# Check browser console for JavaScript errors
# Verify Chart.js library is loaded
# Test API endpoint:
curl http://localhost:3001/api/analytics/overview
```
### Debug Mode
```bash
# Run with debug logging
python main.py --log-level debug
# Check API responses
curl -v http://localhost:3001/api/search/transactions
```
## 📱 Mobile Support
The enhanced explorer is fully responsive and works on:
- **Desktop browsers**: Chrome, Firefox, Safari, Edge
- **Tablet devices**: iPad, Android tablets
- **Mobile phones**: iOS Safari, Chrome Mobile
Mobile-specific features:
- **Touch-friendly interface**: Optimized for touch interactions
- **Responsive charts**: Charts adapt to screen size
- **Simplified navigation**: Mobile-optimized menu
- **Quick actions**: One-tap export and search
## 🔗 Integration
### API Integration
The explorer provides RESTful APIs for integration with:
- **Custom dashboards**: Build custom analytics dashboards
- **Mobile apps**: Integrate blockchain data into mobile applications
- **Trading bots**: Provide blockchain data for automated trading
- **Research tools**: Power blockchain research platforms
### Webhook Support
Configure webhooks for:
- **New block notifications**: Get notified when new blocks are mined
- **Transaction alerts**: Receive alerts for specific transactions
- **Network events**: Monitor network health and performance
## 🚀 Deployment
### Docker Deployment
```bash
# Build Docker image
docker build -t aitbc-explorer .
# Run container
docker run -p 3001:3001 aitbc-explorer
```
### Production Deployment
```bash
# Install with systemd
sudo cp aitbc-explorer.service /etc/systemd/system/
sudo systemctl enable aitbc-explorer
sudo systemctl start aitbc-explorer
# Configure nginx reverse proxy
sudo cp nginx.conf /etc/nginx/sites-available/aitbc-explorer
sudo ln -s /etc/nginx/sites-available/aitbc-explorer /etc/nginx/sites-enabled/
sudo nginx -t && sudo systemctl reload nginx
```
### Environment Configuration
```bash
# Production environment
export NODE_ENV=production
export BLOCKCHAIN_RPC_URL=https://mainnet.aitbc.dev
export EXPLORER_PORT=3001
export LOG_LEVEL=info
```
## 📈 Roadmap
### Upcoming Features
- **WebSocket real-time updates**: Live blockchain monitoring
- **Advanced charting**: More sophisticated analytics visualizations
- **Custom dashboards**: User-configurable dashboard layouts
- **Alert system**: Email and webhook notifications
- **Multi-language support**: Internationalization
- **Dark mode**: Dark theme support
### Future Enhancements
- **Mobile app**: Native mobile applications
- **API authentication**: Secure API access with API keys
- **Advanced filtering**: More sophisticated search options
- **Performance analytics**: Detailed performance metrics
- **Social features**: Share and discuss blockchain data
## 🤝 Contributing
We welcome contributions! Please see our [Contributing Guide](CONTRIBUTING.md) for details.
### Development Setup
```bash
# Clone repository
git clone https://github.com/aitbc/blockchain-explorer.git
cd blockchain-explorer
# Create virtual environment
python -m venv venv
source venv/bin/activate
# Install development dependencies
pip install -r requirements-dev.txt
# Run tests
pytest
# Start development server
python main.py --reload
```
## 📄 License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
## 📞 Support
- **Documentation**: [Full documentation](https://docs.aitbc.dev/explorer)
- **Issues**: [GitHub Issues](https://github.com/aitbc/blockchain-explorer/issues)
- **Discord**: [AITBC Discord](https://discord.gg/aitbc)
- **Email**: support@aitbc.dev
---
*Enhanced AITBC Blockchain Explorer - Bringing CLI power to the web interface*


@@ -0,0 +1,199 @@
# Blockchain Node (Brother Chain)
Production-ready blockchain node for AITBC with fixed supply and secure key management.
## Status
**Operational** — Core blockchain functionality implemented.
### Capabilities
- PoA consensus with single proposer
- Transaction processing (TRANSFER, RECEIPT_CLAIM)
- Gossip-based peer-to-peer networking (in-memory backend)
- RESTful RPC API (`/rpc/*`)
- Prometheus metrics (`/metrics`)
- Health check endpoint (`/health`)
- SQLite persistence with Alembic migrations
- Multi-chain support (separate data directories per chain ID)
## Architecture
### Wallets & Supply
- **Fixed supply**: All tokens minted at genesis; no further minting.
- **Two wallets**:
- `aitbc1genesis` (treasury): holds the full initial supply (default 1B AIT). This is the **cold storage** wallet; private key is encrypted in keystore.
- `aitbc1treasury` (spending): operational wallet for transactions; initially zero balance. Can receive funds from genesis wallet.
- **Private keys** are stored in `keystore/*.json` using AES-256-GCM encryption. Password is stored in `keystore/.password` (mode 600).
### Chain Configuration
- **Chain ID**: `ait-mainnet` (production)
- **Proposer**: The genesis wallet address is the block proposer and authority.
- **Trusted proposers**: Only the genesis wallet is allowed to produce blocks.
- **No admin endpoints**: The `/rpc/admin/mintFaucet` endpoint has been removed.
## Quickstart (Production)
### 1. Generate Production Keys & Genesis
Run the setup script once to create the keystore, allocations, and genesis:
```bash
cd /opt/aitbc/apps/blockchain-node
.venv/bin/python scripts/setup_production.py --chain-id ait-mainnet
```
This creates:
- `keystore/aitbc1genesis.json` (treasury wallet)
- `keystore/aitbc1treasury.json` (spending wallet)
- `keystore/.password` (random strong password)
- `data/ait-mainnet/allocations.json`
- `data/ait-mainnet/genesis.json`
**Important**: Back up the keystore directory and the `.password` file securely. Loss of these means loss of funds.
### 2. Configure Environment
Copy the provided production environment file:
```bash
cp .env.production .env
```
Edit `.env` if you need to adjust ports or paths. Ensure `chain_id=ait-mainnet` and `proposer_id` matches the genesis wallet address (the setup script sets it automatically in `.env.production`).
### 3. Start the Node
Use the production launcher:
```bash
bash scripts/mainnet_up.sh
```
This starts:
- Blockchain node (PoA proposer)
- RPC API on `http://127.0.0.1:8026`
Press `Ctrl+C` to stop both.
### Manual Startup (Alternative)
```bash
cd /opt/aitbc/apps/blockchain-node
source .env.production # or export the variables manually
# Terminal 1: Node
.venv/bin/python -m aitbc_chain.main
# Terminal 2: RPC
.venv/bin/uvicorn aitbc_chain.app:app --host 127.0.0.1 --port 8026
```
## API Endpoints
RPC API available at `http://127.0.0.1:8026/rpc`.
### Blockchain
- `GET /rpc/head` — Current chain head
- `GET /rpc/blocks/{height}` — Get block by height
- `GET /rpc/blocks-range?start=0&end=10` — Block range
- `GET /rpc/info` — Chain information
- `GET /rpc/supply` — Token supply (total & circulating)
- `GET /rpc/validators` — List of authorities
- `GET /rpc/state` — Full state dump
### Transactions
- `POST /rpc/sendTx` — Submit transaction (TRANSFER, RECEIPT_CLAIM)
- `GET /rpc/transactions` — Latest transactions
- `GET /rpc/tx/{tx_hash}` — Get transaction by hash
- `POST /rpc/estimateFee` — Estimate fee
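A hypothetical sketch of building a TRANSFER payload for `POST /rpc/sendTx` — the exact field names (`sender`, `recipient`, `amount`, `nonce`, `signature`) are not documented here and are assumptions for illustration only:

```python
# Hypothetical TRANSFER payload builder for POST /rpc/sendTx.
# All field names below are assumptions, not the node's documented schema.

def build_transfer_tx(sender, recipient, amount, nonce, signature_hex):
    """Assemble an illustrative TRANSFER transaction body."""
    if amount <= 0:
        raise ValueError("amount must be positive")
    return {
        "type": "TRANSFER",
        "sender": sender,
        "recipient": recipient,
        "amount": amount,
        "nonce": nonce,
        "signature": signature_hex,  # signing itself is out of scope here
    }

tx = build_transfer_tx("aitbc1treasury", "aitbc1example", 10, 0, "ab" * 32)
print(tx["type"])  # TRANSFER
```

Submitting the resulting body would be an ordinary HTTP POST to `/rpc/sendTx`; fee estimation via `POST /rpc/estimateFee` would precede it.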
### Accounts
- `GET /rpc/getBalance/{address}` — Account balance
- `GET /rpc/address/{address}` — Address details + txs
- `GET /rpc/addresses` — List active addresses
### Health & Metrics
- `GET /health` — Health check
- `GET /metrics` — Prometheus metrics
*Note: Admin endpoints (`/rpc/admin/*`) are disabled in production.*
## Multi-Chain Support
The node can run multiple chains simultaneously by setting `supported_chains` in `.env` as a comma-separated list (e.g., `ait-mainnet,ait-testnet`). Each chain must have its own `data/<chain_id>/genesis.json` and (optionally) its own keystore. The proposer identity is shared across chains; for multi-chain deployments you may want separate proposer wallets per chain.
## Keystore Management
### Encrypted Keystore Format
- Uses Web3 keystore format (AES-256-GCM + PBKDF2).
- Password stored in `keystore/.password` (chmod 600).
- Private keys are **never** stored in plaintext.
### Changing the Password
```bash
# Use the keystore.py script to re-encrypt with a new password
.venv/bin/python scripts/keystore.py --name genesis --show --password <old> --new-password <new>
```
(Not yet implemented; currently you must manually decrypt and re-encrypt.)
### Adding a New Wallet
```bash
.venv/bin/python scripts/keystore.py --name mywallet --create
```
This appends a new entry to `allocations.json` if you want it to receive genesis allocation (edit the file and regenerate genesis).
## Genesis & Supply
- Genesis file is generated by `scripts/make_genesis.py`.
- Supply is fixed: the sum of `allocations[].balance`.
- No tokens can be minted after genesis (`mint_per_unit=0`).
- To change the allocation distribution, edit `allocations.json` and regenerate genesis (requires consensus to reset chain).
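The fixed-supply rule above can be illustrated in a few lines — total supply is simply the sum of `allocations[].balance`, with nothing minted afterwards (the addresses and balances below are made up for the example, not the real allocation file):

```python
# Illustrative allocations.json content (addresses and balances are invented)
allocations = [
    {"address": "aitbc1genesis", "balance": 60_000_000},
    {"address": "aitbc1treasury", "balance": 40_000_000},
]

def total_supply(allocs: list[dict]) -> int:
    """Fixed supply is the sum of allocations[].balance; mint_per_unit=0 means
    no tokens can be created after genesis."""
    return sum(a["balance"] for a in allocs)

supply = total_supply(allocations)
```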
## Development / Devnet
The old devnet (faucet model) has been removed. For local development, use the production setup with a throwaway keystore, or create a separate `ait-devnet` chain by providing your own `allocations.json` and running `scripts/make_genesis.py` manually.
## Troubleshooting
**Genesis missing**: Run `scripts/setup_production.py` first.
**Proposer key not loaded**: Ensure `keystore/aitbc1genesis.json` exists and `keystore/.password` is readable. The node will log a warning but still run (block signing disabled until implemented).
**Port already in use**: Change `rpc_bind_port` in `.env` and restart.
**Database locked**: Delete `data/ait-mainnet/chain.db` and restart (only if you're sure no other node is using it).
## Project Layout
```
blockchain-node/
├── src/aitbc_chain/
│ ├── app.py # FastAPI app + routes
│ ├── main.py # Proposer loop + startup
│ ├── config.py # Settings from .env
│ ├── database.py # DB init + session mgmt
│ ├── mempool.py # Transaction mempool
│ ├── gossip/ # P2P message bus
│ ├── consensus/ # PoA proposer logic
│ ├── rpc/ # RPC endpoints
│ └── models.py # SQLModel definitions
├── data/
│ └── ait-mainnet/
│ ├── genesis.json # Generated by make_genesis.py
│ └── chain.db # SQLite database
├── keystore/
│ ├── aitbc1genesis.json
│ ├── aitbc1treasury.json
│ └── .password
├── scripts/
│ ├── make_genesis.py # Genesis generator
│   ├── setup_production.py  # One-time production setup
│ ├── mainnet_up.sh # Production launcher
│ └── keystore.py # Keystore utilities
└── .env.production # Production environment template
```
## Security Notes
- **Never** expose RPC API to the public internet without authentication (production should add mTLS or API keys).
- Keep keystore and password backups offline.
- The node runs as the current user; ensure file permissions restrict access to the `keystore/` and `data/` directories.
- In a multi-node network, use Redis gossip backend and configure `trusted_proposers` with all authority addresses.

# Compliance Applications
Compliance and regulatory services.
## Applications
- [Compliance Service](compliance-service.md) - Compliance checking and regulatory services
## Features
- Compliance verification
- Regulatory checks
- Audit logging

# Compliance Service
## Status
✅ Operational
## Overview
Compliance checking and regulatory services for ensuring AITBC operations meet regulatory requirements and industry standards.
## Architecture
### Core Components
- **Compliance Checker**: Validates operations against compliance rules
- **Rule Engine**: Manages and executes compliance rules
- **Audit Logger**: Logs compliance-related events
- **Report Generator**: Generates compliance reports
- **Policy Manager**: Manages compliance policies
## Quick Start (End Users)
### Prerequisites
- Python 3.13+
- PostgreSQL database for audit logs
- Compliance rule definitions
### Installation
```bash
cd /opt/aitbc/apps/compliance-service
.venv/bin/pip install -r requirements.txt
```
### Configuration
Set environment variables in `.env`:
```bash
DATABASE_URL=postgresql://user:pass@localhost/compliance
RULES_PATH=/opt/aitbc/compliance/rules
AUDIT_LOG_ENABLED=true
REPORT_INTERVAL=86400
```
### Running the Service
```bash
.venv/bin/python main.py
```
## Developer Guide
### Development Setup
1. Clone the repository
2. Create virtual environment: `python -m venv .venv`
3. Install dependencies: `pip install -r requirements.txt`
4. Set up database
5. Configure compliance rules
6. Run tests: `pytest tests/`
### Project Structure
```
compliance-service/
├── src/
│ ├── compliance_checker/ # Compliance checking
│ ├── rule_engine/ # Rule management
│ ├── audit_logger/ # Audit logging
│ ├── report_generator/ # Report generation
│ └── policy_manager/ # Policy management
├── rules/ # Compliance rules
├── tests/ # Test suite
└── pyproject.toml # Project configuration
```
### Testing
```bash
# Run all tests
pytest tests/
# Run compliance checker tests
pytest tests/test_compliance.py
# Run rule engine tests
pytest tests/test_rules.py
```
## API Reference
### Compliance Checking
#### Check Compliance
```http
POST /api/v1/compliance/check
Content-Type: application/json
{
"entity_type": "agent|transaction|user",
"entity_id": "string",
"action": "string",
"context": {}
}
```
#### Get Compliance Status
```http
GET /api/v1/compliance/status/{entity_id}
```
#### Batch Compliance Check
```http
POST /api/v1/compliance/check/batch
Content-Type: application/json
{
"checks": [
{"entity_type": "string", "entity_id": "string", "action": "string"}
]
}
```
### Rule Management
#### Add Rule
```http
POST /api/v1/compliance/rules
Content-Type: application/json
{
"rule_id": "string",
"name": "string",
"description": "string",
"conditions": {},
"severity": "high|medium|low"
}
```
#### Update Rule
```http
PUT /api/v1/compliance/rules/{rule_id}
Content-Type: application/json
{
"conditions": {},
"severity": "high|medium|low"
}
```
#### List Rules
```http
GET /api/v1/compliance/rules?category=kyc|aml
```
### Audit Logging
#### Get Audit Logs
```http
GET /api/v1/compliance/audit?entity_id=string&limit=100
```
#### Search Audit Logs
```http
POST /api/v1/compliance/audit/search
Content-Type: application/json
{
"filters": {
"entity_type": "string",
"action": "string",
"date_range": {"start": "2024-01-01", "end": "2024-12-31"}
}
}
```
### Reporting
#### Generate Compliance Report
```http
POST /api/v1/compliance/reports/generate
Content-Type: application/json
{
"report_type": "summary|detailed",
"period": "daily|weekly|monthly",
"scope": {}
}
```
#### Get Report
```http
GET /api/v1/compliance/reports/{report_id}
```
#### List Reports
```http
GET /api/v1/compliance/reports?period=monthly
```
### Policy Management
#### Get Policy
```http
GET /api/v1/compliance/policies/{policy_id}
```
#### Update Policy
```http
PUT /api/v1/compliance/policies/{policy_id}
Content-Type: application/json
{
"policy": {}
}
```
## Configuration
### Environment Variables
- `DATABASE_URL`: PostgreSQL connection string
- `RULES_PATH`: Path to compliance rules
- `AUDIT_LOG_ENABLED`: Enable audit logging
- `REPORT_INTERVAL`: Report generation interval (default: 86400s)
### Compliance Categories
- **KYC**: Know Your Customer verification
- **AML**: Anti-Money Laundering checks
- **Data Privacy**: Data protection compliance
- **Financial**: Financial regulations
### Rule Parameters
- **Conditions**: Rule conditions and logic
- **Severity**: Rule severity level
- **Actions**: Actions to take on rule violation
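The rule parameters above (conditions, severity, actions) can be sketched as a minimal evaluation step. This assumes the simplest possible condition format — equality matches against the entity context — which is an illustration only, not the service's actual rule syntax:

```python
def evaluate_rule(rule: dict, context: dict) -> dict:
    """Flag a violation when every condition key/value matches the context.

    The equality-match condition format here is an assumption for illustration.
    """
    matched = all(context.get(k) == v for k, v in rule["conditions"].items())
    return {"rule_id": rule["rule_id"], "violated": matched, "severity": rule["severity"]}

rule = {"rule_id": "aml-001", "conditions": {"country": "sanctioned"}, "severity": "high"}
result = evaluate_rule(rule, {"country": "sanctioned", "amount": 5000})
```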
## Troubleshooting
**Compliance check failed**: Review rule conditions and entity data.
**Rule not executing**: Verify rule syntax and configuration.
**Audit logs not appearing**: Check audit log configuration and database connectivity.
**Report generation failed**: Verify report parameters and data availability.
## Security Notes
- Encrypt audit log data
- Implement access controls for compliance data
- Regularly review and update compliance rules
- Monitor for compliance violations
- Implement secure policy management
- Regularly audit compliance service access

# Coordinator Applications
Job coordination and agent management services.
## Applications
- [Coordinator API](coordinator-api.md) - FastAPI service for job coordination and matching
- [Agent Coordinator](agent-coordinator.md) - Agent coordination and management
## Features
- Job submission and lifecycle tracking
- Miner matching
- Marketplace endpoints
- Explorer data endpoints
- Signed receipts support

# Agent Coordinator
## Status
✅ Operational
## Overview
FastAPI-based agent coordination service that manages agent discovery, load balancing, and task distribution across the AITBC network.
## Architecture
### Core Components
- **Agent Registry**: Central registry for tracking available agents
- **Agent Discovery Service**: Service for discovering and registering agents
- **Load Balancer**: Distributes tasks across agents using various strategies
- **Task Distributor**: Manages task assignment and priority queues
- **Communication Manager**: Handles inter-agent communication protocols
- **Message Processor**: Processes and routes messages between agents
### AI Integration
- **Real-time Learning**: Adaptive learning system for task optimization
- **Advanced AI**: AI integration for decision making and coordination
- **Distributed Consensus**: Consensus mechanism for agent coordination decisions
### Security
- **JWT Authentication**: Token-based authentication for API access
- **Password Management**: Secure password handling and validation
- **API Key Management**: API key generation and validation
- **Role-Based Access Control**: Fine-grained permissions and roles
- **Security Headers**: Security middleware for HTTP headers
### Monitoring
- **Prometheus Metrics**: Performance metrics and monitoring
- **Performance Monitor**: Real-time performance tracking
- **Alert Manager**: Alerting system for critical events
- **SLA Monitor**: Service Level Agreement monitoring
## Quick Start (End Users)
### Prerequisites
- Python 3.13+
- PostgreSQL database
- Redis for caching
- Valid JWT token or API key
### Installation
```bash
cd /opt/aitbc/apps/agent-coordinator
.venv/bin/pip install -r requirements.txt
```
### Configuration
Set environment variables in `.env`:
```bash
DATABASE_URL=postgresql://user:pass@localhost/agent_coordinator
REDIS_URL=redis://localhost:6379
JWT_SECRET_KEY=your-secret-key
API_KEY=your-api-key
```
### Running the Service
```bash
.venv/bin/uvicorn app.main:app --host 0.0.0.0 --port 8000
```
## Developer Guide
### Development Setup
1. Clone the repository
2. Create virtual environment: `python -m venv .venv`
3. Install dependencies: `pip install -r requirements.txt`
4. Set up local database: See config.py for database settings
5. Run tests: `pytest tests/`
### Project Structure
```
agent-coordinator/
├── src/app/
│ ├── ai/ # AI integration modules
│ ├── auth/ # Authentication and authorization
│ ├── consensus/ # Distributed consensus
│ ├── coordination/ # Agent coordination logic
│ ├── decision/ # Decision making modules
│ ├── lifecycle/ # Agent lifecycle management
│ ├── main.py # FastAPI application
│ ├── monitoring/ # Monitoring and metrics
│ ├── protocols/ # Communication protocols
│ └── routing/ # Agent discovery and routing
├── tests/ # Test suite
└── pyproject.toml # Project configuration
```
### Testing
```bash
# Run all tests
pytest tests/
# Run specific test
pytest tests/test_agent_registry.py
# Run with coverage
pytest --cov=src tests/
```
## API Reference
### Agent Management
#### Register Agent
```http
POST /api/v1/agents/register
Content-Type: application/json
Authorization: Bearer <jwt_token>
{
"agent_id": "string",
"agent_type": "string",
"capabilities": ["string"],
"endpoint": "string"
}
```
#### Discover Agents
```http
GET /api/v1/agents/discover
Authorization: Bearer <jwt_token>
```
#### Get Agent Status
```http
GET /api/v1/agents/{agent_id}/status
Authorization: Bearer <jwt_token>
```
### Task Management
#### Submit Task
```http
POST /api/v1/tasks/submit
Content-Type: application/json
Authorization: Bearer <jwt_token>
{
"task_type": "string",
"payload": {},
"priority": "high|medium|low",
"requirements": {}
}
```
#### Get Task Status
```http
GET /api/v1/tasks/{task_id}/status
Authorization: Bearer <jwt_token>
```
#### List Tasks
```http
GET /api/v1/tasks?status=pending&limit=10
Authorization: Bearer <jwt_token>
```
### Load Balancing
#### Get Load Balancer Status
```http
GET /api/v1/loadbalancer/status
Authorization: Bearer <jwt_token>
```
#### Configure Load Balancing Strategy
```http
PUT /api/v1/loadbalancer/strategy
Content-Type: application/json
Authorization: Bearer <jwt_token>
{
"strategy": "round_robin|least_loaded|weighted",
"parameters": {}
}
```
## Configuration
### Environment Variables
- `DATABASE_URL`: PostgreSQL connection string
- `REDIS_URL`: Redis connection string
- `JWT_SECRET_KEY`: Secret key for JWT token signing
- `API_KEY`: API key for service authentication
- `LOG_LEVEL`: Logging level (default: INFO)
- `AGENT_DISCOVERY_INTERVAL`: Interval for agent discovery (default: 30s)
- `TASK_TIMEOUT`: Task timeout in seconds (default: 300)
### Load Balancing Strategies
- **Round Robin**: Distributes tasks evenly across agents
- **Least Loaded**: Assigns tasks to the agent with lowest load
- **Weighted**: Uses agent weights for task distribution
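The first two strategies above can be sketched in a few lines; the agent representation (a dict with an `id` and a reported `load`) is an assumption for illustration, not the coordinator's internal model:

```python
from itertools import cycle

agents = [{"id": "a1", "load": 3}, {"id": "a2", "load": 1}, {"id": "a3", "load": 2}]

rr = cycle(agents)

def round_robin() -> dict:
    """Hand out agents in a fixed rotation, one per task."""
    return next(rr)

def least_loaded(pool: list[dict]) -> dict:
    """Pick the agent reporting the lowest current load."""
    return min(pool, key=lambda a: a["load"])

first, second = round_robin(), round_robin()
lightest = least_loaded(agents)
```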
## Troubleshooting
**Agent not discovered**: Check agent registration endpoint and network connectivity.
**Task distribution failures**: Verify load balancer configuration and agent availability.
**Authentication errors**: Ensure JWT token is valid and not expired.
**Database connection errors**: Check DATABASE_URL and database server status.
## Security Notes
- Never expose JWT_SECRET_KEY in production
- Use HTTPS in production environments
- Implement rate limiting for API endpoints
- Regularly rotate API keys and JWT secrets
- Monitor for unauthorized access attempts

# Coordinator API
## Purpose & Scope
FastAPI service that accepts client compute jobs, matches miners, and tracks job lifecycle for the AITBC network.
## Marketplace Extensions
Stage 2 introduces public marketplace endpoints exposed under `/v1/marketplace`:
- `GET /v1/marketplace/offers` list available provider offers (filterable by status).
- `GET /v1/marketplace/stats` aggregated supply/demand metrics surfaced in the marketplace web dashboard.
- `POST /v1/marketplace/bids` accept bid submissions for matching (mock-friendly; returns `202 Accepted`).
These endpoints serve the `apps/marketplace-web/` dashboard via `VITE_MARKETPLACE_DATA_MODE=live`.
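A bid submission against `POST /v1/marketplace/bids` can be composed as below. The request is built but not sent; the base URL, port, and body field names (`offer_id`, `price`) are assumptions for illustration:

```python
import json
from urllib.request import Request

def build_bid_request(base_url: str, offer_id: str, price: float) -> Request:
    """Compose (but do not send) a bid submission; body fields are assumptions."""
    body = json.dumps({"offer_id": offer_id, "price": price}).encode()
    return Request(
        url=f"{base_url}/v1/marketplace/bids",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_bid_request("http://127.0.0.1:8011", "offer-42", 0.25)
```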
## Explorer Endpoints
The coordinator now exposes read-only explorer data under `/v1/explorer` for `apps/explorer-web/` live mode:
- `GET /v1/explorer/blocks` block summaries derived from recent job activity.
- `GET /v1/explorer/transactions` transaction-like records for coordinator jobs.
- `GET /v1/explorer/addresses` aggregated address activity and balances.
- `GET /v1/explorer/receipts` latest job receipts (filterable by `job_id`).
Set `VITE_DATA_MODE=live` and `VITE_COORDINATOR_API` in the explorer web app to consume these APIs.
## Development Setup
1. Create a virtual environment in `apps/coordinator-api/.venv`.
2. Install dependencies listed in `pyproject.toml` once added.
3. Run the FastAPI app via `uvicorn app.main:app --reload`.
## Configuration
Expects environment variables defined in `.env` (see `docs/bootstrap/coordinator_api.md`).
### Signed receipts (optional)
- Generate an Ed25519 key:
```bash
python - <<'PY'
from nacl.signing import SigningKey
sk = SigningKey.generate()
print(sk.encode().hex())
PY
```
- Set `RECEIPT_SIGNING_KEY_HEX` in the `.env` file to the printed hex string to enable signed receipts returned by `/v1/miners/{job_id}/result` and retrievable via `/v1/jobs/{job_id}/receipt`.
- Receipt history is available at `/v1/jobs/{job_id}/receipts` (requires client API key) and returns all stored signed payloads.
- To enable coordinator attestations, set `RECEIPT_ATTESTATION_KEY_HEX` to a separate Ed25519 private key; responses include an `attestations` array alongside the miner signature.
- Clients can verify `signature` objects using the `aitbc_crypto` package (see `protocols/receipts/spec.md`).
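The real canonical payload form and Ed25519 signature scheme are defined in `protocols/receipts/spec.md` and verified via `aitbc_crypto`; as a stdlib-only illustration of the general pattern, the sketch below shows canonical serialization (sorted keys, no whitespace) and digesting of a receipt payload, which is the kind of preprocessing a signer performs before signing:

```python
import hashlib
import json

def canonical_digest(payload: dict) -> str:
    """Hash a canonical JSON encoding of a receipt payload.

    Illustration only: the actual canonical form and signature scheme
    are specified in protocols/receipts/spec.md.
    """
    canonical = json.dumps(payload, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

receipt = {"job_id": "job-1", "miner_id": "m-9", "status": "ok"}
digest = canonical_digest(receipt)
```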
## Systemd
Service name: `aitbc-coordinator-api` (to be defined under `configs/systemd/`).

# Cryptographic Applications
Cryptographic services and zero-knowledge circuits.
## Applications
- [ZK Circuits](zk-circuits.md) - Zero-knowledge circuits for privacy-preserving computations
## Features
- Zero-knowledge proofs
- FHE integration
- Privacy-preserving computations

# AITBC ZK Circuits
Zero-knowledge circuits for privacy-preserving receipt attestation in the AITBC network.
## Overview
This project implements zk-SNARK circuits to enable privacy-preserving settlement flows while maintaining verifiability of receipts.
## Quick Start
### Prerequisites
- Node.js 16+
- npm or yarn
### Installation
```bash
cd apps/zk-circuits
npm install
```
### Compile Circuit
```bash
npm run compile
```
### Generate Trusted Setup
```bash
# Start phase 1 setup
npm run setup
# Contribute to setup (run multiple times with different participants)
npm run contribute
# Prepare phase 2
npm run prepare
# Generate proving key
npm run generate-zkey
# Contribute to zkey (optional)
npm run contribute-zkey
# Export verification key
npm run export-verification-key
```
### Generate and Verify Proof
```bash
# Generate proof
npm run generate-proof
# Verify proof
npm run verify
# Run tests
npm test
```
## Circuit Design
### Current Implementation
The initial circuit (`receipt.circom`) implements a simple hash preimage proof:
- **Public Inputs**: Receipt hash
- **Private Inputs**: Receipt data (job ID, miner ID, result, pricing)
- **Proof**: Demonstrates knowledge of receipt data without revealing it
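The statement the circuit enforces can be written out directly. The sketch below uses SHA-256 purely to illustrate the relation — knowledge of a preimage of the public hash — whereas the circuit itself would use a SNARK-friendly hash inside the proof:

```python
import hashlib

def commit(receipt_bytes: bytes) -> str:
    """Public input: the receipt hash (circuit would use a SNARK-friendly hash)."""
    return hashlib.sha256(receipt_bytes).hexdigest()

def statement_holds(public_hash: str, private_receipt: bytes) -> bool:
    """The relation the proof attests to: knowledge of a preimage of the public hash,
    without revealing the receipt data itself."""
    return commit(private_receipt) == public_hash

private_data = b'{"job_id":"job-1","result":"ok"}'
public_input = commit(private_data)
ok = statement_holds(public_input, private_data)
```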
### Future Enhancements
1. **Full Receipt Attestation**: Complete validation of receipt structure
2. **Signature Verification**: ECDSA signature validation
3. **Arithmetic Validation**: Pricing and reward calculations
4. **Range Proofs**: Confidential transaction amounts
## Development
### Circuit Structure
```
receipt.circom # Main circuit file
├── ReceiptHashPreimage # Simple hash preimage proof
├── ReceiptAttestation # Full receipt validation (WIP)
└── ECDSAVerify # Signature verification (WIP)
```
### Testing
```bash
# Run all tests
npm test
# Run specific test
npx mocha test.js
```
### Integration
The circuits integrate with:
1. **Coordinator API**: Proof generation service
2. **Settlement Layer**: On-chain verification contracts
3. **Pool Hub**: Privacy options for miners
## Security
### Trusted Setup
The Groth16 setup requires a trusted setup ceremony:
1. Multi-party participation (>100 recommended)
2. Public documentation
3. Destruction of toxic waste
### Audits
- Circuit formal verification
- Third-party security review
- Public disclosure of circuits
## Performance
| Metric | Value |
|--------|-------|
| Proof Size | ~200 bytes |
| Prover Time | 5-15 seconds |
| Verifier Time | 3ms |
| Gas Cost | ~200k |
## Troubleshooting
### Common Issues
1. **Circuit compilation fails**: Check circom version and syntax
2. **Setup fails**: Ensure sufficient disk space and memory
3. **Proof generation slow**: Consider using faster hardware or PLONK
### Debug Commands
```bash
# Check circuit constraints
circom receipt.circom --r1cs --inspect
# View witness
snarkjs wtns check witness.wtns receipt.wasm input.json
# Debug proof generation
DEBUG=snarkjs npm run generate-proof
```
## Resources
- [Circom Documentation](https://docs.circom.io/)
- [snarkjs Documentation](https://github.com/iden3/snarkjs)
- [ZK Whitepaper](https://eprint.iacr.org/2016/260)
## Contributing
1. Fork the repository
2. Create feature branch
3. Submit pull request with tests
## License
MIT

# Exchange Applications
Exchange services and trading infrastructure.
## Applications
- [Exchange](exchange.md) - Cross-chain exchange and trading platform
- [Exchange Integration](exchange-integration.md) - Exchange integration services
- [Trading Engine](trading-engine.md) - Trading engine for order matching
## Features
- Cross-chain exchange
- Order matching and execution
- Price tickers
- Health monitoring
- Multi-chain support

# Exchange Integration
## Status
✅ Operational
## Overview
Integration service for connecting the exchange with external systems, blockchains, and data providers.
## Architecture
### Core Components
- **Blockchain Connector**: Connects to blockchain RPC endpoints
- **Data Feed Manager**: Manages external data feeds
- **Webhook Handler**: Processes webhook notifications
- **API Client**: Client for external exchange APIs
- **Event Processor**: Processes integration events
## Quick Start (End Users)
### Prerequisites
- Python 3.13+
- Access to blockchain RPC endpoints
- API keys for external exchanges
### Installation
```bash
cd /opt/aitbc/apps/exchange-integration
.venv/bin/pip install -r requirements.txt
```
### Configuration
Set environment variables in `.env`:
```bash
BLOCKCHAIN_RPC_URL=http://localhost:8006
EXTERNAL_EXCHANGE_API_KEY=your-api-key
WEBHOOK_SECRET=your-webhook-secret
```
### Running the Service
```bash
.venv/bin/python main.py
```
## Developer Guide
### Development Setup
1. Clone the repository
2. Create virtual environment: `python -m venv .venv`
3. Install dependencies: `pip install -r requirements.txt`
4. Configure environment variables
5. Run tests: `pytest tests/`
### Project Structure
```
exchange-integration/
├── src/
│ ├── blockchain_connector/ # Blockchain integration
│ ├── data_feed_manager/ # Data feed management
│ ├── webhook_handler/ # Webhook processing
│ ├── api_client/ # External API client
│ └── event_processor/ # Event processing
├── tests/ # Test suite
└── pyproject.toml # Project configuration
```
### Testing
```bash
# Run all tests
pytest tests/
# Run blockchain integration tests
pytest tests/test_blockchain.py
# Run webhook tests
pytest tests/test_webhook.py
```
## API Reference
### Blockchain Integration
#### Get Blockchain Status
```http
GET /api/v1/integration/blockchain/status
```
#### Sync Blockchain Data
```http
POST /api/v1/integration/blockchain/sync
Content-Type: application/json
{
"chain_id": "ait-mainnet",
"from_height": 1000,
"to_height": 2000
}
```
### Data Feeds
#### Subscribe to Data Feed
```http
POST /api/v1/integration/feeds/subscribe
Content-Type: application/json
{
"feed_type": "price|volume|orders",
"symbols": ["BTC_AIT", "ETH_AIT"]
}
```
#### Get Feed Data
```http
GET /api/v1/integration/feeds/{feed_id}/data
```
### Webhooks
#### Register Webhook
```http
POST /api/v1/integration/webhooks
Content-Type: application/json
{
"url": "https://example.com/webhook",
"events": ["order_filled", "price_update"],
"secret": "your-secret"
}
```
#### Process Webhook
```http
POST /api/v1/integration/webhooks/process
Content-Type: application/json
X-Webhook-Secret: your-secret
{
"event": "order_filled",
"data": {}
}
```
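Incoming webhooks carrying the `X-Webhook-Secret` header shown above should be checked with a constant-time comparison. This is a minimal sketch matching the documented header; a production setup would rather verify an HMAC signature computed over the request body:

```python
import hmac

WEBHOOK_SECRET = "your-secret"

def verify_webhook(headers: dict, body: bytes) -> bool:
    """Constant-time check of the X-Webhook-Secret header from the example above.

    Sketch only: verifying an HMAC of `body` would be the stronger scheme.
    """
    provided = headers.get("X-Webhook-Secret", "")
    return hmac.compare_digest(provided, WEBHOOK_SECRET)

ok = verify_webhook({"X-Webhook-Secret": "your-secret"}, b"{}")
bad = verify_webhook({"X-Webhook-Secret": "wrong"}, b"{}")
```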
## Configuration
### Environment Variables
- `BLOCKCHAIN_RPC_URL`: Blockchain RPC endpoint
- `EXTERNAL_EXCHANGE_API_KEY`: API key for external exchanges
- `WEBHOOK_SECRET`: Secret for webhook validation
- `SYNC_INTERVAL`: Interval for blockchain sync (default: 60s)
- `MAX_RETRIES`: Maximum retry attempts for failed requests
- `TIMEOUT`: Request timeout in seconds
### Integration Settings
- **Supported Chains**: List of supported blockchain networks
- **Data Feed Providers**: External data feed providers
- **Webhook Endpoints**: Configurable webhook endpoints
## Troubleshooting
**Blockchain sync failed**: Check RPC endpoint connectivity and authentication.
**Data feed not updating**: Verify API key and data feed configuration.
**Webhook not triggered**: Check webhook URL and secret configuration.
**API rate limiting**: Implement retry logic with exponential backoff.
## Security Notes
- Validate webhook signatures
- Use HTTPS for all external connections
- Rotate API keys regularly
- Implement rate limiting for external API calls
- Monitor for suspicious activity

# Exchange
## Status
✅ Operational
## Overview
Cross-chain exchange and trading platform supporting multiple blockchain networks with real-time price tracking and order matching.
## Architecture
### Core Components
- **Order Book**: Central order book for all trading pairs
- **Matching Engine**: Real-time order matching and execution
- **Price Ticker**: Real-time price updates and market data
- **Cross-Chain Bridge**: Bridge for cross-chain asset transfers
- **Health Monitor**: System health monitoring and alerting
- **API Server**: RESTful API for exchange operations
### Supported Features
- Multiple trading pairs
- Cross-chain asset transfers
- Real-time price updates
- Order management (limit, market, stop orders)
- Health monitoring
- Multi-chain support
## Quick Start (End Users)
### Prerequisites
- Python 3.13+
- PostgreSQL database
- Redis for caching
- Access to blockchain RPC endpoints
### Installation
```bash
cd /opt/aitbc/apps/exchange
.venv/bin/pip install -r requirements.txt
```
### Configuration
Set environment variables in `.env`:
```bash
DATABASE_URL=postgresql://user:pass@localhost/exchange
REDIS_URL=redis://localhost:6379
BLOCKCHAIN_RPC_URL=http://localhost:8006
CROSS_CHAIN_ENABLED=true
```
### Running the Service
```bash
# Start the exchange server
python server.py
# Or use the production launcher
bash deploy_real_exchange.sh
```
### Web Interface
Open `index.html` in a browser to access the web interface.
## Developer Guide
### Development Setup
1. Clone the repository
2. Create virtual environment: `python -m venv .venv`
3. Install dependencies: `pip install -r requirements.txt`
4. Set up database: See database.py
5. Configure environment variables
6. Run tests: `pytest tests/`
### Project Structure
```
exchange/
├── server.py # Main server
├── exchange_api.py # Exchange API endpoints
├── multichain_exchange_api.py # Multi-chain API
├── simple_exchange_api.py # Simple exchange API
├── cross_chain_exchange.py # Cross-chain exchange logic
├── real_exchange_integration.py # Real exchange integration
├── models.py # Database models
├── database.py # Database connection
├── health_monitor.py # Health monitoring
├── index.html # Web interface
├── styles.css # Web styles
├── update_price_ticker.js # Price ticker update script
└── scripts/ # Utility scripts
```
### Testing
```bash
# Run all tests
pytest tests/
# Run API tests
pytest tests/test_api.py
# Run cross-chain tests
pytest tests/test_cross_chain.py
```
## API Reference
### Market Data
#### Get Order Book
```http
GET /api/v1/orderbook?pair=BTC_AIT
```
#### Get Price Ticker
```http
GET /api/v1/ticker?pair=BTC_AIT
```
#### Get Market Summary
```http
GET /api/v1/market/summary
```
### Orders
#### Place Order
```http
POST /api/v1/orders
Content-Type: application/json
{
"pair": "BTC_AIT",
"side": "buy|sell",
"type": "limit|market|stop",
"amount": 100,
"price": 1.0,
"user_id": "string"
}
```
#### Get Order Status
```http
GET /api/v1/orders/{order_id}
```
#### Cancel Order
```http
DELETE /api/v1/orders/{order_id}
```
#### Get User Orders
```http
GET /api/v1/orders?user_id=string&status=open
```
### Cross-Chain
#### Initiate Cross-Chain Transfer
```http
POST /api/v1/crosschain/transfer
Content-Type: application/json
{
"from_chain": "ait-mainnet",
"to_chain": "btc-mainnet",
"asset": "BTC",
"amount": 100,
"recipient": "address"
}
```
#### Get Transfer Status
```http
GET /api/v1/crosschain/transfers/{transfer_id}
```
### Health
#### Get Health Status
```http
GET /health
```
#### Get System Metrics
```http
GET /metrics
```
## Configuration
### Environment Variables
- `DATABASE_URL`: PostgreSQL connection string
- `REDIS_URL`: Redis connection string
- `BLOCKCHAIN_RPC_URL`: Blockchain RPC endpoint
- `CROSS_CHAIN_ENABLED`: Enable cross-chain transfers
- `MAX_ORDER_SIZE`: Maximum order size
- `MIN_ORDER_SIZE`: Minimum order size
- `TRADING_FEE_PERCENTAGE`: Trading fee percentage
- `ORDER_TIMEOUT`: Order timeout in seconds
### Trading Parameters
- **Order Types**: limit, market, stop orders
- **Order Sides**: buy, sell
- **Trading Pairs**: Configurable trading pairs
- **Fee Structure**: Percentage-based trading fees
## Troubleshooting
**Order not matched**: Check order book depth and price settings.
**Cross-chain transfer failed**: Verify blockchain connectivity and bridge status.
**Price ticker not updating**: Check WebSocket connection and data feed.
**Database connection errors**: Verify DATABASE_URL and database server status.
## Security Notes
- Use API keys for authentication
- Implement rate limiting for API endpoints
- Validate all order parameters
- Encrypt sensitive data at rest
- Monitor for suspicious trading patterns
- Regularly audit order history

# Trading Engine
## Status
✅ Operational
## Overview
High-performance trading engine for order matching, execution, and trade settlement with support for multiple order types and trading strategies.
## Architecture
### Core Components
- **Order Matching Engine**: Real-time order matching algorithm
- **Trade Executor**: Executes matched trades
- **Risk Manager**: Risk assessment and position management
- **Settlement Engine**: Trade settlement and clearing
- **Order Book Manager**: Manages order book state
- **Price Engine**: Calculates fair market prices
## Quick Start (End Users)
### Prerequisites
- Python 3.13+
- PostgreSQL database
- Redis for caching
- Access to exchange APIs
### Installation
```bash
cd /opt/aitbc/apps/trading-engine
.venv/bin/pip install -r requirements.txt
```
### Configuration
Set environment variables in `.env`:
```bash
DATABASE_URL=postgresql://user:pass@localhost/trading
REDIS_URL=redis://localhost:6379
EXCHANGE_API_KEY=your-api-key
RISK_LIMITS_ENABLED=true
```
### Running the Service
```bash
.venv/bin/python main.py
```
## Developer Guide
### Development Setup
1. Clone the repository
2. Create virtual environment: `python -m venv .venv`
3. Install dependencies: `pip install -r requirements.txt`
4. Set up database
5. Configure environment variables
6. Run tests: `pytest tests/`
### Project Structure
```
trading-engine/
├── src/
│ ├── matching_engine/ # Order matching logic
│ ├── trade_executor/ # Trade execution
│ ├── risk_manager/ # Risk management
│ ├── settlement_engine/ # Trade settlement
│ ├── order_book/ # Order book management
│ └── price_engine/ # Price calculation
├── tests/ # Test suite
└── pyproject.toml # Project configuration
```
### Testing
```bash
# Run all tests
pytest tests/
# Run matching engine tests
pytest tests/test_matching.py
# Run risk manager tests
pytest tests/test_risk.py
```
## API Reference
### Order Management
#### Submit Order
```http
POST /api/v1/trading/orders
Content-Type: application/json
{
"user_id": "string",
"symbol": "BTC_AIT",
"side": "buy|sell",
"type": "limit|market|stop",
"quantity": 100,
"price": 1.0,
"stop_price": 1.1
}
```
#### Cancel Order
```http
DELETE /api/v1/trading/orders/{order_id}
```
#### Get Order Status
```http
GET /api/v1/trading/orders/{order_id}
```
### Trade Execution
#### Get Trade History
```http
GET /api/v1/trading/trades?symbol=BTC_AIT&limit=100
```
#### Get User Trades
```http
GET /api/v1/trading/users/{user_id}/trades
```
### Risk Management
#### Check Risk Limits
```http
POST /api/v1/trading/risk/check
Content-Type: application/json
{
"user_id": "string",
"order": {}
}
```
#### Get User Risk Profile
```http
GET /api/v1/trading/users/{user_id}/risk-profile
```
### Settlement
#### Get Settlement Status
```http
GET /api/v1/trading/settlement/{trade_id}
```
#### Trigger Settlement
```http
POST /api/v1/trading/settlement/trigger
Content-Type: application/json
{
"trade_ids": ["trade1", "trade2"]
}
```
## Configuration
### Environment Variables
- `DATABASE_URL`: PostgreSQL connection string
- `REDIS_URL`: Redis connection string
- `EXCHANGE_API_KEY`: Exchange API key
- `RISK_LIMITS_ENABLED`: Enable risk management
- `MAX_POSITION_SIZE`: Maximum position size
- `MARGIN_REQUIREMENT`: Margin requirement percentage
- `LIQUIDATION_THRESHOLD`: Liquidation threshold
### Order Types
- **Limit Order**: Execute at specified price or better
- **Market Order**: Execute immediately at market price
- **Stop Order**: Trigger when price reaches stop price
- **Stop-Limit**: Limit order triggered by stop price
### Risk Parameters
- **Position Limits**: Maximum position sizes
- **Margin Requirements**: Required margin for leverage
- **Liquidation Threshold**: Price at which positions are liquidated
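A hedged sketch of how `MAX_POSITION_SIZE` and `LIQUIDATION_THRESHOLD` might be applied; the margin formula shown here is an assumption, not the risk manager's actual calculation:

```python
def margin_ratio(collateral: float, position_value: float) -> float:
    """Collateral as a fraction of the open position's value (assumed formula)."""
    if position_value <= 0:
        raise ValueError("position_value must be positive")
    return collateral / position_value

def should_liquidate(collateral: float, position_value: float,
                     liquidation_threshold: float = 0.05) -> bool:
    # Liquidate once the margin ratio falls to or below the threshold.
    return margin_ratio(collateral, position_value) <= liquidation_threshold

def within_position_limit(position_value: float, max_position_size: float) -> bool:
    # Mirrors the MAX_POSITION_SIZE environment variable above.
    return position_value <= max_position_size
```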
## Troubleshooting
**Order not matched**: Check order book depth and price settings.
**Trade execution failed**: Verify exchange connectivity and balance.
**Risk check failed**: Review user risk profile and position limits.
**Settlement delayed**: Check blockchain network status and gas fees.
## Security Notes
- Implement order validation
- Use rate limiting for order submission
- Monitor for wash trading
- Validate user authentication
- Implement position limits
- Regularly audit trade history
# Explorer Applications
Blockchain explorer and analytics services.
## Applications
- [Simple Explorer](simple-explorer.md) - Simple blockchain explorer
## Features
- Block exploration
- Transaction search
- Address tracking
# Simple Explorer
## Status
✅ Operational
## Overview
Simple blockchain explorer for viewing blocks, transactions, and addresses on the AITBC blockchain.
## Architecture
### Core Components
- **Block Viewer**: Displays block information and details
- **Transaction Viewer**: Displays transaction information
- **Address Viewer**: Displays address details and transaction history
- **Search Engine**: Searches for blocks, transactions, and addresses
- **Data Fetcher**: Fetches data from blockchain RPC
## Quick Start (End Users)
### Prerequisites
- Python 3.13+
- Access to blockchain RPC endpoint
- Web browser
### Installation
```bash
cd /opt/aitbc/apps/simple-explorer
.venv/bin/pip install -r requirements.txt
```
### Configuration
Set environment variables in `.env`:
```bash
BLOCKCHAIN_RPC_URL=http://localhost:8006
CHAIN_ID=ait-mainnet
EXPLORER_PORT=8080
```
### Running the Service
```bash
.venv/bin/python main.py
```
### Access Explorer
Open `http://localhost:8080` in a web browser to access the explorer.
## Developer Guide
### Development Setup
1. Clone the repository
2. Create virtual environment: `python -m venv .venv`
3. Install dependencies: `pip install -r requirements.txt`
4. Configure blockchain RPC endpoint
5. Run tests: `pytest tests/`
### Project Structure
```
simple-explorer/
├── src/
│ ├── block_viewer/ # Block viewing
│ ├── transaction_viewer/ # Transaction viewing
│ ├── address_viewer/ # Address viewing
│ ├── search_engine/ # Search functionality
│ └── data_fetcher/ # Data fetching
├── templates/ # HTML templates
├── static/ # Static assets
├── tests/ # Test suite
└── pyproject.toml # Project configuration
```
### Testing
```bash
# Run all tests
pytest tests/
# Run data fetcher tests
pytest tests/test_fetcher.py
# Run search engine tests
pytest tests/test_search.py
```
## API Reference
### Block Viewing
#### Get Block by Height
```http
GET /api/v1/explorer/block/{height}
```
#### Get Block by Hash
```http
GET /api/v1/explorer/block/hash/{hash}
```
#### Get Latest Blocks
```http
GET /api/v1/explorer/blocks/latest?limit=20
```
### Transaction Viewing
#### Get Transaction by Hash
```http
GET /api/v1/explorer/transaction/{hash}
```
#### Get Transactions by Address
```http
GET /api/v1/explorer/transactions/address/{address}?limit=50
```
#### Get Latest Transactions
```http
GET /api/v1/explorer/transactions/latest?limit=50
```
### Address Viewing
#### Get Address Details
```http
GET /api/v1/explorer/address/{address}
```
#### Get Address Balance
```http
GET /api/v1/explorer/address/{address}/balance
```
#### Get Address Transactions
```http
GET /api/v1/explorer/address/{address}/transactions?limit=50
```
### Search
#### Search
```http
GET /api/v1/explorer/search?q={query}
```
#### Search Blocks
```http
GET /api/v1/explorer/search/blocks?q={query}
```
#### Search Transactions
```http
GET /api/v1/explorer/search/transactions?q={query}
```
#### Search Addresses
```http
GET /api/v1/explorer/search/addresses?q={query}
```
### Statistics
#### Get Chain Statistics
```http
GET /api/v1/explorer/stats
```
#### Get Network Status
```http
GET /api/v1/explorer/network/status
```
## Configuration
### Environment Variables
- `BLOCKCHAIN_RPC_URL`: Blockchain RPC endpoint
- `CHAIN_ID`: Blockchain chain ID
- `EXPLORER_PORT`: Explorer web server port
- `CACHE_ENABLED`: Enable data caching
- `CACHE_TTL`: Cache time-to-live in seconds
### Display Parameters
- **Blocks Per Page**: Number of blocks per page (default: 20)
- **Transactions Per Page**: Number of transactions per page (default: 50)
- **Address History Limit**: Transaction history limit per address
### Caching
- **Block Cache**: Cache block data
- **Transaction Cache**: Cache transaction data
- **Address Cache**: Cache address data
- **Cache TTL**: Time-to-live for cached data
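A minimal sketch of the TTL-based caching described above (the explorer's actual cache implementation may differ):

```python
import time

class TTLCache:
    """Minimal time-based cache for block/transaction/address lookups."""

    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._store: dict = {}

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            del self._store[key]  # evict the stale entry on read
            return None
        return value

    def set(self, key, value):
        self._store[key] = (value, time.monotonic() + self.ttl)
```

In practice the `CACHE_TTL` setting would feed the `ttl_seconds` argument, with separate instances (or key prefixes such as `block:`) per data type.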
## Troubleshooting
**Explorer not loading**: Check blockchain RPC connectivity and explorer port.
**Data not updating**: Verify cache configuration and RPC endpoint availability.
**Search not working**: Check search index and data availability.
**Address history incomplete**: Verify blockchain sync status and data availability.
## Security Notes
- Use HTTPS in production
- Implement rate limiting for API endpoints
- Sanitize user inputs
- Cache sensitive data appropriately
- Monitor for abuse
- Regularly update dependencies
# Global AI Applications
Global AI agent services.
## Applications
- [Global AI Agents](global-ai-agents.md) - Global AI agent coordination
## Features
- Cross-region AI coordination
- Distributed AI operations
- Global agent discovery
# Global AI Agents
## Status
✅ Operational
## Overview
Global AI agent coordination service for managing distributed AI agents across multiple regions and networks.
## Architecture
### Core Components
- **Agent Discovery**: Discovers AI agents across the global network
- **Coordination Engine**: Coordinates agent activities and decisions
- **Communication Bridge**: Bridges communication between regional agent clusters
- **Load Distributor**: Distributes AI workloads across regions
- **State Synchronizer**: Synchronizes agent state across regions
## Quick Start (End Users)
### Prerequisites
- Python 3.13+
- Access to regional agent clusters
- Network connectivity between regions
### Installation
```bash
cd /opt/aitbc/apps/global-ai-agents
.venv/bin/pip install -r requirements.txt
```
### Configuration
Set environment variables in `.env`:
```bash
REGIONAL_CLUSTERS=us-east:https://us.example.com,eu-west:https://eu.example.com
COORDINATION_INTERVAL=30
STATE_SYNC_ENABLED=true
```
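Note that each `REGIONAL_CLUSTERS` entry contains a `:` inside the URL, so a parser must split on the first colon only. A small illustrative helper (`parse_clusters` is hypothetical, not part of the service API):

```python
def parse_clusters(spec: str) -> dict[str, str]:
    """Parse 'region:url,region:url' into {region: url}.

    Splits each entry on the first ':' only, since the endpoint URL
    itself contains '://'.
    """
    clusters = {}
    for item in spec.split(","):
        region, _, endpoint = item.strip().partition(":")
        if not region or not endpoint:
            raise ValueError(f"malformed cluster entry: {item!r}")
        clusters[region] = endpoint
    return clusters
```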
### Running the Service
```bash
.venv/bin/python main.py
```
## Developer Guide
### Development Setup
1. Clone the repository
2. Create virtual environment: `python -m venv .venv`
3. Install dependencies: `pip install -r requirements.txt`
4. Configure regional cluster endpoints
5. Run tests: `pytest tests/`
### Project Structure
```
global-ai-agents/
├── src/
│ ├── agent_discovery/ # Agent discovery
│ ├── coordination_engine/ # Coordination logic
│ ├── communication_bridge/ # Regional communication
│ ├── load_distributor/ # Workload distribution
│ └── state_synchronizer/ # State synchronization
├── tests/ # Test suite
└── pyproject.toml # Project configuration
```
### Testing
```bash
# Run all tests
pytest tests/
# Run coordination engine tests
pytest tests/test_coordination.py
# Run state synchronizer tests
pytest tests/test_sync.py
```
## API Reference
### Agent Discovery
#### Discover Agents
```http
POST /api/v1/global-ai/discover
Content-Type: application/json
{
"region": "us-east",
"agent_type": "string"
}
```
#### Get Agent Registry
```http
GET /api/v1/global-ai/agents?region=us-east
```
#### Register Agent
```http
POST /api/v1/global-ai/agents/register
Content-Type: application/json
{
"agent_id": "string",
"region": "us-east",
"capabilities": ["string"]
}
```
### Coordination
#### Coordinate Task
```http
POST /api/v1/global-ai/coordinate
Content-Type: application/json
{
"task_id": "string",
"task_type": "string",
"requirements": {},
"regions": ["us-east", "eu-west"]
}
```
#### Get Coordination Status
```http
GET /api/v1/global-ai/coordination/{task_id}
```
### Communication
#### Send Message
```http
POST /api/v1/global-ai/communication/send
Content-Type: application/json
{
"from_region": "us-east",
"to_region": "eu-west",
"message": {}
}
```
#### Get Communication Log
```http
GET /api/v1/global-ai/communication/log?limit=100
```
### Load Distribution
#### Distribute Workload
```http
POST /api/v1/global-ai/distribute
Content-Type: application/json
{
"workload": {},
"strategy": "round_robin|least_loaded"
}
```
#### Get Load Status
```http
GET /api/v1/global-ai/load/status
```
### State Synchronization
#### Sync State
```http
POST /api/v1/global-ai/sync/trigger
Content-Type: application/json
{
"state_type": "string",
"regions": ["us-east", "eu-west"]
}
```
#### Get Sync Status
```http
GET /api/v1/global-ai/sync/status
```
## Configuration
### Environment Variables
- `REGIONAL_CLUSTERS`: Comma-separated regional cluster endpoints
- `COORDINATION_INTERVAL`: Coordination check interval (default: 30s)
- `STATE_SYNC_ENABLED`: Enable state synchronization
- `MAX_LATENCY`: Maximum acceptable latency between regions
### Coordination Strategies
- **Round Robin**: Distribute tasks evenly across regions
- **Least Loaded**: Route to region with lowest load
- **Proximity**: Route to nearest region based on latency
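The first two strategies can be sketched in a few lines (illustrative only; the coordination engine's real implementation may weigh additional factors):

```python
import itertools

def round_robin(regions: list[str]):
    """Yield regions in a repeating cycle (one region per task)."""
    return itertools.cycle(regions)

def least_loaded(loads: dict[str, float]) -> str:
    """Route to the region reporting the lowest current load."""
    return min(loads, key=loads.get)
```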
### Synchronization Parameters
- **Sync Interval**: Frequency of state synchronization
- **Conflict Resolution**: Strategy for resolving state conflicts
- **Compression**: Enable state compression for transfers
## Troubleshooting
**Agent not discovered**: Check regional cluster connectivity and agent registration.
**Coordination failed**: Verify agent availability and task requirements.
**Communication bridge down**: Check network connectivity between regions.
**State sync delayed**: Review sync interval and network bandwidth.
## Security Notes
- Use TLS for all inter-region communication
- Implement authentication for regional clusters
- Encrypt state data during synchronization
- Monitor for unauthorized agent registration
- Implement rate limiting for coordination requests
- Regularly audit agent registry
# Infrastructure Applications
Monitoring, load balancing, and infrastructure services.
## Applications
- [Monitor](monitor.md) - System monitoring and alerting
- [Multi-Region Load Balancer](multi-region-load-balancer.md) - Load balancing across regions
- [Global Infrastructure](global-infrastructure.md) - Global infrastructure management
## Features
- System monitoring
- Health checks
- Load balancing
- Multi-region support
# Global Infrastructure
## Status
✅ Operational
## Overview
Global infrastructure management service for provisioning, deploying, and monitoring AITBC infrastructure across multiple regions and cloud providers.
## Architecture
### Core Components
- **Infrastructure Manager**: Manages infrastructure resources
- **Deployment Service**: Handles deployments across regions
- **Resource Scheduler**: Schedules resources optimally
- **Configuration Manager**: Manages infrastructure configuration
- **Cost Optimizer**: Optimizes infrastructure costs
## Quick Start (End Users)
### Prerequisites
- Python 3.13+
- Cloud provider credentials (AWS, GCP, Azure)
- Terraform or CloudFormation templates
### Installation
```bash
cd /opt/aitbc/apps/global-infrastructure
.venv/bin/pip install -r requirements.txt
```
### Configuration
Set environment variables in `.env`:
```bash
CLOUD_PROVIDER=aws
AWS_ACCESS_KEY_ID=your-access-key
AWS_SECRET_ACCESS_KEY=your-secret-key
AWS_REGION=us-east-1
TERRAFORM_PATH=/path/to/terraform
```
### Running the Service
```bash
.venv/bin/python main.py
```
## Developer Guide
### Development Setup
1. Clone the repository
2. Create virtual environment: `python -m venv .venv`
3. Install dependencies: `pip install -r requirements.txt`
4. Configure cloud provider credentials
5. Run tests: `pytest tests/`
### Project Structure
```
global-infrastructure/
├── src/
│ ├── infrastructure_manager/ # Infrastructure management
│ ├── deployment_service/ # Deployment orchestration
│ ├── resource_scheduler/ # Resource scheduling
│ ├── config_manager/ # Configuration management
│ └── cost_optimizer/ # Cost optimization
├── terraform/ # Terraform templates
├── tests/ # Test suite
└── pyproject.toml # Project configuration
```
### Testing
```bash
# Run all tests
pytest tests/
# Run deployment tests
pytest tests/test_deployment.py
# Run cost optimizer tests
pytest tests/test_cost.py
```
## API Reference
### Infrastructure Management
#### Get Infrastructure Status
```http
GET /api/v1/infrastructure/status
```
#### Provision Resource
```http
POST /api/v1/infrastructure/provision
Content-Type: application/json
{
"resource_type": "server|database|storage",
"region": "us-east-1",
"specifications": {}
}
```
#### Decommission Resource
```http
DELETE /api/v1/infrastructure/resources/{resource_id}
```
### Deployment
#### Deploy Service
```http
POST /api/v1/infrastructure/deploy
Content-Type: application/json
{
"service_name": "blockchain-node",
"region": "us-east-1",
"configuration": {}
}
```
#### Get Deployment Status
```http
GET /api/v1/infrastructure/deployments/{deployment_id}
```
### Resource Scheduling
#### Get Resource Utilization
```http
GET /api/v1/infrastructure/resources/utilization
```
#### Optimize Resources
```http
POST /api/v1/infrastructure/resources/optimize
Content-Type: application/json
{
"optimization_type": "cost|performance",
"constraints": {}
}
```
### Configuration
#### Get Configuration
```http
GET /api/v1/infrastructure/config/{region}
```
#### Update Configuration
```http
PUT /api/v1/infrastructure/config/{region}
Content-Type: application/json
{
"parameters": {}
}
```
### Cost Management
#### Get Cost Report
```http
GET /api/v1/infrastructure/costs?period=month
```
#### Get Cost Optimization Recommendations
```http
GET /api/v1/infrastructure/costs/recommendations
```
## Configuration
### Environment Variables
- `CLOUD_PROVIDER`: Cloud provider (aws, gcp, azure)
- `AWS_ACCESS_KEY_ID`: AWS access key
- `AWS_SECRET_ACCESS_KEY`: AWS secret key
- `AWS_REGION`: Default AWS region
- `TERRAFORM_PATH`: Path to Terraform templates
- `DEPLOYMENT_TIMEOUT`: Deployment timeout in seconds
### Infrastructure Parameters
- **Regions**: Supported cloud regions
- **Instance Types**: Available instance types
- **Storage Classes**: Storage class configurations
- **Network Configurations**: VPC and network settings
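As a sketch of the `cost|performance` choice exposed by the optimize endpoint (the candidate fields `hourly_cost` and `score` are illustrative, not the service's actual schema):

```python
def pick_instance(candidates: list[dict], optimization_type: str) -> dict:
    """Choose an instance type by cost or performance.

    Each candidate carries 'name', 'hourly_cost', and 'score'
    (a relative performance score); field names are hypothetical.
    """
    if optimization_type == "cost":
        return min(candidates, key=lambda c: c["hourly_cost"])
    if optimization_type == "performance":
        return max(candidates, key=lambda c: c["score"])
    raise ValueError(f"unknown optimization type: {optimization_type}")
```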
## Troubleshooting
**Deployment failed**: Check cloud provider credentials and configuration.
**Resource not provisioned**: Verify resource specifications and quotas.
**Cost optimization not working**: Review cost optimizer configuration and constraints.
**Configuration sync failed**: Check network connectivity and configuration validity.
## Security Notes
- Rotate cloud provider credentials regularly
- Use IAM roles instead of access keys when possible
- Enable encryption for all storage resources
- Implement network security groups and firewalls
- Monitor for unauthorized resource changes
- Regularly audit infrastructure configuration
# Monitor
## Status
✅ Operational
## Overview
System monitoring and alerting service that tracks application health and performance metrics and generates alerts for critical events.
## Architecture
### Core Components
- **Health Check Service**: Periodic health checks for all services
- **Metrics Collector**: Collects performance metrics from applications
- **Alert Manager**: Manages alert rules and notifications
- **Dashboard**: Web dashboard for monitoring visualization
- **Log Aggregator**: Aggregates logs from all services
- **Notification Service**: Sends alerts via email, Slack, etc.
## Quick Start (End Users)
### Prerequisites
- Python 3.13+
- Access to application endpoints
- Notification service credentials (email, Slack webhook)
### Installation
```bash
cd /opt/aitbc/apps/monitor
.venv/bin/pip install -r requirements.txt
```
### Configuration
Set environment variables in `.env`:
```bash
MONITOR_INTERVAL=60
ALERT_EMAIL=admin@example.com
SLACK_WEBHOOK=https://hooks.slack.com/services/...
PROMETHEUS_URL=http://localhost:9090
```
### Running the Service
```bash
.venv/bin/python main.py
```
### Access Dashboard
Open `http://localhost:8080` in a browser to access the monitoring dashboard.
## Developer Guide
### Development Setup
1. Clone the repository
2. Create virtual environment: `python -m venv .venv`
3. Install dependencies: `pip install -r requirements.txt`
4. Configure monitoring targets
5. Run tests: `pytest tests/`
### Project Structure
```
monitor/
├── src/
│ ├── health_check/ # Health check service
│ ├── metrics_collector/ # Metrics collection
│ ├── alert_manager/ # Alert management
│ ├── dashboard/ # Web dashboard
│ ├── log_aggregator/ # Log aggregation
│ └── notification/ # Notification service
├── tests/ # Test suite
└── pyproject.toml # Project configuration
```
### Testing
```bash
# Run all tests
pytest tests/
# Run health check tests
pytest tests/test_health_check.py
# Run alert manager tests
pytest tests/test_alerts.py
```
## API Reference
### Health Checks
#### Run Health Check
```http
GET /api/v1/monitor/health/{service_name}
```
#### Get All Health Status
```http
GET /api/v1/monitor/health
```
#### Add Health Check Target
```http
POST /api/v1/monitor/health/targets
Content-Type: application/json
{
"service_name": "string",
"endpoint": "http://localhost:8000/health",
"interval": 60,
"timeout": 10
}
```
### Metrics
#### Get Metrics
```http
GET /api/v1/monitor/metrics?service=blockchain-node
```
#### Query Prometheus
```http
POST /api/v1/monitor/metrics/query
Content-Type: application/json
{
"query": "up{job=\"blockchain-node\"}",
"range": "1h"
}
```
### Alerts
#### Create Alert Rule
```http
POST /api/v1/monitor/alerts/rules
Content-Type: application/json
{
"name": "high_cpu_usage",
"condition": "cpu_usage > 80",
"duration": 300,
"severity": "warning|critical",
"notification": "email|slack"
}
```
#### Get Active Alerts
```http
GET /api/v1/monitor/alerts/active
```
#### Acknowledge Alert
```http
POST /api/v1/monitor/alerts/{alert_id}/acknowledge
```
### Logs
#### Query Logs
```http
POST /api/v1/monitor/logs/query
Content-Type: application/json
{
"service": "blockchain-node",
"level": "ERROR",
"time_range": "1h",
"query": "error"
}
```
#### Get Log Statistics
```http
GET /api/v1/monitor/logs/stats?service=blockchain-node
```
## Configuration
### Environment Variables
- `MONITOR_INTERVAL`: Interval for health checks (default: 60s)
- `ALERT_EMAIL`: Email address for alert notifications
- `SLACK_WEBHOOK`: Slack webhook for notifications
- `PROMETHEUS_URL`: Prometheus server URL
- `LOG_RETENTION_DAYS`: Log retention period (default: 30 days)
- `ALERT_COOLDOWN`: Alert cooldown period (default: 300s)
### Monitoring Targets
- **Services**: List of services to monitor
- **Endpoints**: Health check endpoints for each service
- **Intervals**: Check intervals for each service
### Alert Rules
- **CPU Usage**: Alert when CPU usage exceeds threshold
- **Memory Usage**: Alert when memory usage exceeds threshold
- **Disk Usage**: Alert when disk usage exceeds threshold
- **Service Down**: Alert when service is unresponsive
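The `condition`/`duration` semantics above (an alert fires only after its condition has held continuously for `duration` seconds) can be sketched as follows; this is illustrative, not the alert manager's actual code:

```python
class AlertRule:
    """Fire only after the metric has breached `threshold` for `duration` seconds."""

    def __init__(self, name: str, threshold: float, duration: float):
        self.name = name
        self.threshold = threshold
        self.duration = duration
        self._breach_started = None  # timestamp of the first consecutive breach

    def evaluate(self, value: float, now: float) -> bool:
        if value <= self.threshold:
            self._breach_started = None  # condition cleared; reset the timer
            return False
        if self._breach_started is None:
            self._breach_started = now
        return now - self._breach_started >= self.duration
```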
## Troubleshooting
**Health check failing**: Verify service endpoint and network connectivity.
**Alerts not triggering**: Check alert rule configuration and notification settings.
**Metrics not collecting**: Verify Prometheus integration and service metrics endpoints.
**Logs not appearing**: Check log aggregation configuration and service log paths.
## Security Notes
- Secure access to monitoring dashboard
- Use authentication for API endpoints
- Encrypt alert notification credentials
- Implement role-based access control
- Regularly review alert rules
- Monitor for unauthorized access attempts
# Multi-Region Load Balancer
## Status
✅ Operational
## Overview
Load balancing service for distributing traffic across multiple regions and ensuring high availability and optimal performance.
## Architecture
### Core Components
- **Load Balancer**: Distributes traffic across regions
- **Health Checker**: Monitors regional health status
- **Traffic Router**: Routes traffic based on load and latency
- **Failover Manager**: Handles failover between regions
- **Configuration Manager**: Manages load balancing rules
## Quick Start (End Users)
### Prerequisites
- Python 3.13+
- Multiple regional endpoints
- DNS configuration for load balancing
### Installation
```bash
cd /opt/aitbc/apps/multi-region-load-balancer
.venv/bin/pip install -r requirements.txt
```
### Configuration
Set environment variables in `.env`:
```bash
REGIONAL_ENDPOINTS=us-east:https://us.example.com,eu-west:https://eu.example.com
LOAD_BALANCING_STRATEGY=round_robin  # one of: round_robin, least_latency, weighted
HEALTH_CHECK_INTERVAL=30
FAILOVER_ENABLED=true
```
### Running the Service
```bash
.venv/bin/python main.py
```
## Developer Guide
### Development Setup
1. Clone the repository
2. Create virtual environment: `python -m venv .venv`
3. Install dependencies: `pip install -r requirements.txt`
4. Configure regional endpoints
5. Run tests: `pytest tests/`
### Project Structure
```
multi-region-load-balancer/
├── src/
│ ├── load_balancer/ # Load balancing logic
│ ├── health_checker/ # Regional health monitoring
│ ├── traffic_router/ # Traffic routing
│ ├── failover_manager/ # Failover management
│ └── config_manager/ # Configuration management
├── tests/ # Test suite
└── pyproject.toml # Project configuration
```
### Testing
```bash
# Run all tests
pytest tests/
# Run load balancer tests
pytest tests/test_load_balancer.py
# Run failover tests
pytest tests/test_failover.py
```
## API Reference
### Load Balancing
#### Get Load Balancer Status
```http
GET /api/v1/lb/status
```
#### Configure Load Balancing Strategy
```http
PUT /api/v1/lb/strategy
Content-Type: application/json
{
"strategy": "round_robin|least_latency|weighted",
"parameters": {}
}
```
#### Get Regional Status
```http
GET /api/v1/lb/regions
```
### Health Checks
#### Run Health Check
```http
POST /api/v1/lb/health/check
Content-Type: application/json
{
"region": "us-east"
}
```
#### Get Health History
```http
GET /api/v1/lb/health/history?region=us-east
```
### Failover
#### Trigger Manual Failover
```http
POST /api/v1/lb/failover/trigger
Content-Type: application/json
{
"from_region": "us-east",
"to_region": "eu-west"
}
```
#### Get Failover Status
```http
GET /api/v1/lb/failover/status
```
### Configuration
#### Add Regional Endpoint
```http
POST /api/v1/lb/regions
Content-Type: application/json
{
"region": "us-west",
"endpoint": "https://us-west.example.com",
"weight": 1.0
}
```
#### Remove Regional Endpoint
```http
DELETE /api/v1/lb/regions/{region}
```
## Configuration
### Environment Variables
- `REGIONAL_ENDPOINTS`: Comma-separated regional endpoints
- `LOAD_BALANCING_STRATEGY`: Strategy for load distribution
- `HEALTH_CHECK_INTERVAL`: Interval for health checks (default: 30s)
- `FAILOVER_ENABLED`: Enable automatic failover
- `FAILOVER_THRESHOLD`: Threshold for triggering failover
### Load Balancing Strategies
- **Round Robin**: Distributes traffic evenly across regions
- **Least Latency**: Routes to region with lowest latency
- **Weighted**: Uses configured weights for distribution
### Health Check Parameters
- **Check Interval**: Frequency of health checks
- **Timeout**: Timeout for health check responses
- **Failure Threshold**: Number of failures before marking region down
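A minimal sketch of the failure-threshold behavior (consecutive failed checks before a region is marked down); the health checker's actual state handling is an assumption:

```python
class RegionHealth:
    """Mark a region down after `failure_threshold` consecutive failed checks."""

    def __init__(self, failure_threshold: int = 3):
        self.failure_threshold = failure_threshold
        self.consecutive_failures = 0
        self.healthy = True

    def record(self, check_ok: bool) -> bool:
        if check_ok:
            # Any successful check resets the counter and restores the region.
            self.consecutive_failures = 0
            self.healthy = True
        else:
            self.consecutive_failures += 1
            if self.consecutive_failures >= self.failure_threshold:
                self.healthy = False
        return self.healthy
```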
## Troubleshooting
**Load balancing not working**: Verify regional endpoints and strategy configuration.
**Failover not triggering**: Check health check configuration and thresholds.
**High latency**: Review regional health and network connectivity.
**Uneven distribution**: Check weights and load balancing strategy.
## Security Notes
- Use TLS for all regional connections
- Implement authentication for load balancer API
- Monitor for DDoS attacks
- Regularly review regional access
- Implement rate limiting
# Marketplace Applications
GPU marketplace and pool hub services.
## Applications
- [Marketplace](marketplace.md) - GPU marketplace for compute resources
- [Pool Hub](pool-hub.md) - Pool hub for resource pooling
## Features
- GPU resource marketplace
- Provider offers and bids
- Pool management
- Multi-chain support
# Marketplace Web
Mock UI for exploring marketplace offers and submitting bids.
## Development
```bash
npm install
npm run dev
```
The dev server listens on `http://localhost:5173/` by default. Adjust via `--host`/`--port` flags in the `systemd` unit or `package.json` script.
## Data Modes
Marketplace web reuses the explorer pattern of mock vs. live data:
- Set `VITE_MARKETPLACE_DATA_MODE=mock` (default) to consume JSON fixtures under `public/mock/`.
- Set `VITE_MARKETPLACE_DATA_MODE=live` and point `VITE_MARKETPLACE_API` to the coordinator backend when integration-ready.
### Feature Flags & Auth
- `VITE_MARKETPLACE_ENABLE_BIDS` (default `true`) gates whether the bid form submits to the backend. Set to `false` to keep the UI read-only during phased rollouts.
- `VITE_MARKETPLACE_REQUIRE_AUTH` (default `false`) enforces a bearer token session before live bid submissions. Tokens are stored in `localStorage` by `src/lib/auth.ts`; the API helpers automatically attach the `Authorization` header when a session is present.
- Session JSON is expected to include `token` (string) and `expiresAt` (epoch ms). Expired or malformed entries are cleared automatically.
Document any backend expectations (e.g., coordinator accepting bearer tokens) alongside the environment variables in deployment manifests.
## Structure
- `public/mock/offers.json` sample marketplace offers.
- `public/mock/stats.json` summary dashboard statistics.
- `src/lib/api.ts` data-mode-aware fetch helpers.
- `src/main.ts` renders dashboard, offers table, and bid form.
- `src/style.css` layout and visual styling.
## Submitting Bids
When in mock mode, bid submissions simulate latency and always succeed.
When in live mode, ensure the coordinator exposes `/v1/marketplace/offers`, `/v1/marketplace/stats`, and `/v1/marketplace/bids` endpoints compatible with the JSON shapes defined in `src/lib/api.ts`.
# Pool Hub
## Purpose & Scope
Matchmaking gateway between coordinator job requests and available miners. See `docs/bootstrap/pool_hub.md` for architectural guidance.
## Development Setup
- Create a Python virtual environment under `apps/pool-hub/.venv`.
- Install FastAPI, Redis (optional), and PostgreSQL client dependencies once requirements are defined.
- Implement routers and registry as described in the bootstrap document.
## SLA Monitoring and Billing Integration
Pool-Hub now includes comprehensive SLA monitoring and billing integration with coordinator-api:
### SLA Metrics
- **Miner Uptime**: Tracks miner availability based on heartbeat intervals
- **Response Time**: Monitors average response time from match results
- **Job Completion Rate**: Tracks successful vs failed job outcomes
- **Capacity Availability**: Monitors overall pool capacity utilization
### SLA Thresholds
Default thresholds (configurable in settings):
- Uptime: 95%
- Response Time: 1000ms
- Completion Rate: 90%
- Capacity Availability: 80%
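How uptime might be derived from heartbeat timestamps, assuming a miner counts as up for one heartbeat interval after each heartbeat; the collector's exact formula is an assumption:

```python
def uptime_pct(heartbeats: list[float], window_start: float, window_end: float,
               heartbeat_interval: float = 30.0) -> float:
    """Estimate uptime as the share of the window covered by heartbeats.

    Each heartbeat covers the next `heartbeat_interval` seconds; longer
    gaps count as downtime. Overlapping coverage is merged, not double-counted.
    """
    intervals = sorted((ts, ts + heartbeat_interval) for ts in heartbeats)
    covered, cursor = 0.0, window_start
    for start, end in intervals:
        start, end = max(start, cursor), min(end, window_end)
        if end > start:
            covered += end - start
            cursor = end
    return 100.0 * covered / (window_end - window_start)
```

The result would then be compared against `SLA_UPTIME_THRESHOLD` to flag a violation.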
### Billing Integration
Pool-Hub integrates with coordinator-api's billing system to:
- Record usage data (gpu_hours, api_calls, compute_hours)
- Sync miner usage to tenant billing
- Generate invoices via coordinator-api
- Track billing metrics and costs
### API Endpoints
SLA and billing endpoints are available under `/sla/`:
- `GET /sla/metrics/{miner_id}` - Get SLA metrics for a miner
- `GET /sla/metrics` - Get SLA metrics across all miners
- `GET /sla/violations` - Get SLA violations
- `POST /sla/metrics/collect` - Trigger SLA metrics collection
- `GET /sla/capacity/snapshots` - Get capacity planning snapshots
- `GET /sla/capacity/forecast` - Get capacity forecast
- `GET /sla/capacity/recommendations` - Get scaling recommendations
- `GET /sla/billing/usage` - Get billing usage data
- `POST /sla/billing/sync` - Trigger billing sync with coordinator-api
### Configuration
Add to `.env`:
```bash
# Coordinator-API Billing Integration
COORDINATOR_BILLING_URL=http://localhost:8011
COORDINATOR_API_KEY=your_api_key_here
# SLA Configuration
SLA_UPTIME_THRESHOLD=95.0
SLA_RESPONSE_TIME_THRESHOLD=1000.0
SLA_COMPLETION_RATE_THRESHOLD=90.0
SLA_CAPACITY_THRESHOLD=80.0
# Capacity Planning
CAPACITY_FORECAST_HOURS=168
CAPACITY_ALERT_THRESHOLD_PCT=80.0
# Billing Sync
BILLING_SYNC_INTERVAL_HOURS=1
# SLA Collection
SLA_COLLECTION_INTERVAL_SECONDS=300
```
### Database Migration
Run the database migration to add SLA and capacity tables:
```bash
cd apps/pool-hub
alembic upgrade head
```
### Testing
Run tests for SLA and billing integration:
```bash
cd apps/pool-hub
pytest tests/test_sla_collector.py
pytest tests/test_billing_integration.py
pytest tests/test_sla_endpoints.py
pytest tests/test_integration_coordinator.py
```
# Mining Applications
Mining and validation services.
## Applications
- [Miner](miner.md) - Mining and block validation services
## Features
- Block validation
- Proof-of-Authority mining
- Reward claiming
# Miner
## Status
✅ Operational
## Overview
Mining and block validation service for the AITBC blockchain using Proof-of-Authority consensus.
## Architecture
### Core Components
- **Block Validator**: Validates blocks from the network
- **Block Proposer**: Proposes new blocks (for authorized proposers)
- **Transaction Validator**: Validates transactions before inclusion
- **Reward Claimer**: Claims mining rewards
- **Sync Manager**: Manages blockchain synchronization
## Quick Start (End Users)
### Prerequisites
- Python 3.13+
- Access to blockchain RPC endpoint
- Valid proposer credentials (if proposing blocks)
### Installation
```bash
cd /opt/aitbc/apps/miner
.venv/bin/pip install -r requirements.txt
```
### Configuration
Set environment variables in `.env`:
```bash
BLOCKCHAIN_RPC_URL=http://localhost:8006
PROPOSER_ID=your-proposer-id
PROPOSER_PRIVATE_KEY=encrypted-key
MINING_ENABLED=true
VALIDATION_ENABLED=true
```
### Running the Service
```bash
.venv/bin/python main.py
```
## Developer Guide
### Development Setup
1. Clone the repository
2. Create virtual environment: `python -m venv .venv`
3. Install dependencies: `pip install -r requirements.txt`
4. Configure blockchain RPC endpoint
5. Configure proposer credentials (if proposing)
6. Run tests: `pytest tests/`
### Project Structure
```
miner/
├── src/
│ ├── block_validator/ # Block validation
│ ├── block_proposer/ # Block proposal
│ ├── transaction_validator/ # Transaction validation
│ ├── reward_claimer/ # Reward claiming
│ └── sync_manager/ # Sync management
├── tests/ # Test suite
└── pyproject.toml # Project configuration
```
### Testing
```bash
# Run all tests
pytest tests/
# Run block validator tests
pytest tests/test_validator.py
# Run block proposer tests
pytest tests/test_proposer.py
```
## API Reference
### Block Validation
#### Validate Block
```http
POST /api/v1/mining/validate/block
Content-Type: application/json
{
"block": {},
"chain_id": "ait-mainnet"
}
```
#### Get Validation Status
```http
GET /api/v1/mining/validation/status
```
### Block Proposal
#### Propose Block
```http
POST /api/v1/mining/propose/block
Content-Type: application/json
{
"chain_id": "ait-mainnet",
"transactions": [{}],
"timestamp": "2024-01-01T00:00:00Z"
}
```
#### Get Proposal Status
```http
GET /api/v1/mining/proposal/status
```
### Transaction Validation
#### Validate Transaction
```http
POST /api/v1/mining/validate/transaction
Content-Type: application/json
{
"transaction": {},
"chain_id": "ait-mainnet"
}
```
#### Get Validation Queue
```http
GET /api/v1/mining/validation/queue?limit=100
```
### Reward Claiming
#### Claim Reward
```http
POST /api/v1/mining/rewards/claim
Content-Type: application/json
{
"block_height": 1000,
"proposer_id": "string"
}
```
#### Get Reward History
```http
GET /api/v1/mining/rewards/history?proposer_id=string
```
### Sync Management
#### Get Sync Status
```http
GET /api/v1/mining/sync/status
```
#### Trigger Sync
```http
POST /api/v1/mining/sync/trigger
Content-Type: application/json
{
"from_height": 1000,
"to_height": 2000
}
```
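For scripting against the proposal endpoint, a minimal stdlib-only client sketch (the base URL is assumed from the `BLOCKCHAIN_RPC_URL` configuration above; the response shape is not specified here):

```python
import json
import urllib.request

BASE_URL = "http://localhost:8006"  # assumption: same host as BLOCKCHAIN_RPC_URL

def build_proposal(chain_id, transactions, timestamp):
    """Assemble the payload for POST /api/v1/mining/propose/block."""
    return {"chain_id": chain_id, "transactions": transactions, "timestamp": timestamp}

def propose_block(payload):
    """Send the proposal and return the decoded JSON response."""
    req = urllib.request.Request(
        f"{BASE_URL}/api/v1/mining/propose/block",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)

# usage: propose_block(build_proposal("ait-mainnet", [{}], "2024-01-01T00:00:00Z"))
```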
## Configuration
### Environment Variables
- `BLOCKCHAIN_RPC_URL`: Blockchain RPC endpoint
- `PROPOSER_ID`: Proposer identifier
- `PROPOSER_PRIVATE_KEY`: Encrypted proposer private key
- `MINING_ENABLED`: Enable block proposal
- `VALIDATION_ENABLED`: Enable block validation
- `SYNC_INTERVAL`: Sync interval in seconds
### Consensus Parameters
- **Block Time**: Time between blocks (default: 10s)
- **Max Transactions**: Maximum transactions per block
- **Block Size**: Maximum block size in bytes
### Validation Rules
- **Signature Validation**: Validate block signatures
- **Transaction Validation**: Validate transaction format
- **State Validation**: Validate state transitions
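The three rule families above can be sketched as one validation pipeline. The individual checks are illustrative placeholders (field names like `parent_hash` are assumptions), not the service's actual rules:

```python
def check_signature(block):
    # Placeholder: a real validator verifies the proposer's cryptographic signature.
    return bool(block.get("signature"))

def check_transactions(block):
    # Placeholder format check: every transaction carries an identifier.
    return all("tx_id" in tx for tx in block.get("transactions", []))

def check_state(block, expected_parent):
    # Placeholder state-transition check: block links to the expected parent.
    return block.get("parent_hash") == expected_parent

def validate_block(block, expected_parent):
    """Run all rule families; any single failure rejects the block."""
    checks = {
        "signature": check_signature(block),
        "transactions": check_transactions(block),
        "state": check_state(block, expected_parent),
    }
    return all(checks.values()), checks
```

Returning the per-rule results alongside the verdict makes rejection reasons visible to the troubleshooting steps below.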
## Troubleshooting
**Block validation failed**: Check block signature and state transitions.
**Proposal rejected**: Verify proposer authorization and block validity.
**Sync not progressing**: Check blockchain RPC connectivity and network status.
**Reward claim failed**: Verify proposer ID and block height.
## Security Notes
- Secure proposer private key storage
- Validate all blocks before acceptance
- Monitor for double-spending attacks
- Implement rate limiting for block proposals
- Regularly audit mining operations
- Use secure key management

# Plugin Applications
Plugin system for extending AITBC functionality.
## Applications
- [Plugin Analytics](plugin-analytics.md) - Analytics plugin
- [Plugin Marketplace](plugin-marketplace.md) - Marketplace plugin
- [Plugin Registry](plugin-registry.md) - Plugin registry
- [Plugin Security](plugin-security.md) - Security plugin
## Features
- Plugin discovery and registration
- Plugin marketplace
- Analytics integration
- Security scanning

# Plugin Analytics
## Status
✅ Operational
## Overview
Analytics plugin for collecting, processing, and analyzing data from various AITBC components and services.
## Architecture
### Core Components
- **Data Collector**: Collects data from services and plugins
- **Data Processor**: Processes and normalizes collected data
- **Analytics Engine**: Performs analytics and generates insights
- **Report Generator**: Generates reports and visualizations
- **Storage Manager**: Manages data storage and retention
## Quick Start (End Users)
### Prerequisites
- Python 3.13+
- PostgreSQL database
- Access to service metrics endpoints
### Installation
```bash
cd /opt/aitbc/apps/plugin-analytics
.venv/bin/pip install -r requirements.txt
```
### Configuration
Set environment variables in `.env`:
```bash
DATABASE_URL=postgresql://user:pass@localhost/analytics
COLLECTION_INTERVAL=300
RETENTION_DAYS=90
```
### Running the Service
```bash
.venv/bin/python main.py
```
## Developer Guide
### Development Setup
1. Clone the repository
2. Create virtual environment: `python -m venv .venv`
3. Install dependencies: `pip install -r requirements.txt`
4. Set up database
5. Configure data sources
6. Run tests: `pytest tests/`
### Project Structure
```
plugin-analytics/
├── src/
│ ├── data_collector/ # Data collection
│ ├── data_processor/ # Data processing
│ ├── analytics_engine/ # Analytics engine
│ ├── report_generator/ # Report generation
│ └── storage_manager/ # Storage management
├── tests/ # Test suite
└── pyproject.toml # Project configuration
```
### Testing
```bash
# Run all tests
pytest tests/
# Run data collector tests
pytest tests/test_collector.py
# Run analytics engine tests
pytest tests/test_analytics.py
```
## API Reference
### Data Collection
#### Start Collection
```http
POST /api/v1/analytics/collection/start
Content-Type: application/json
{
"data_source": "string",
"interval": 300
}
```
#### Stop Collection
```http
POST /api/v1/analytics/collection/stop
Content-Type: application/json
{
"collection_id": "string"
}
```
#### Get Collection Status
```http
GET /api/v1/analytics/collection/status
```
### Analytics
#### Run Analysis
```http
POST /api/v1/analytics/analyze
Content-Type: application/json
{
"analysis_type": "trend|anomaly|correlation",
"data_source": "string",
"time_range": "1h|1d|1w"
}
```
#### Get Analysis Results
```http
GET /api/v1/analytics/results/{analysis_id}
```
### Reports
#### Generate Report
```http
POST /api/v1/analytics/reports/generate
Content-Type: application/json
{
"report_type": "summary|detailed|custom",
"data_source": "string",
"time_range": "1d|1w|1m"
}
```
#### Get Report
```http
GET /api/v1/analytics/reports/{report_id}
```
#### List Reports
```http
GET /api/v1/analytics/reports?limit=10
```
### Data Management
#### Query Data
```http
POST /api/v1/analytics/data/query
Content-Type: application/json
{
"data_source": "string",
"filters": {},
"time_range": "1h"
}
```
#### Export Data
```http
POST /api/v1/analytics/data/export
Content-Type: application/json
{
"data_source": "string",
"format": "csv|json",
"time_range": "1d"
}
```
## Configuration
### Environment Variables
- `DATABASE_URL`: PostgreSQL connection string
- `COLLECTION_INTERVAL`: Data collection interval (default: 300s)
- `RETENTION_DAYS`: Data retention period (default: 90 days)
- `MAX_BATCH_SIZE`: Maximum batch size for processing
### Data Sources
- **Blockchain Metrics**: Blockchain node metrics
- **Exchange Data**: Exchange trading data
- **Agent Activity**: Agent coordination data
- **System Metrics**: System performance metrics
### Analysis Types
- **Trend Analysis**: Identify trends over time
- **Anomaly Detection**: Detect unusual patterns
- **Correlation Analysis**: Find correlations between metrics
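As a rough illustration of the anomaly-detection analysis type, a z-score sketch over a metric series (the service's actual algorithm is not specified here; this is a stand-in using the standard library):

```python
from statistics import mean, stdev

def zscore_anomalies(values, threshold=3.0):
    """Return indices of points whose z-score exceeds the threshold."""
    if len(values) < 2:
        return []  # not enough data to estimate spread
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []  # constant series has no outliers
    return [i for i, v in enumerate(values) if abs(v - mu) / sigma > threshold]
```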
## Troubleshooting
**Data not collecting**: Check data source connectivity and configuration.
**Analysis not running**: Verify data availability and analysis parameters.
**Report generation failed**: Check data completeness and report configuration.
**Storage full**: Review retention policy and data growth rate.
## Security Notes
- Secure database access credentials
- Implement data encryption at rest
- Validate all data inputs
- Implement access controls for sensitive data
- Regularly audit data access logs
- Comply with data retention policies

# Plugin Marketplace
## Status
✅ Operational
## Overview
Marketplace plugin for discovering, installing, and managing AITBC plugins and extensions.
## Architecture
### Core Components
- **Plugin Catalog**: Catalog of available plugins
- **Plugin Installer**: Handles plugin installation and updates
- **Dependency Manager**: Manages plugin dependencies
- **Version Manager**: Handles plugin versioning
- **License Manager**: Manages plugin licenses
## Quick Start (End Users)
### Prerequisites
- Python 3.13+
- Internet access for plugin downloads
- Sufficient disk space for plugins
### Installation
```bash
cd /opt/aitbc/apps/plugin-marketplace
.venv/bin/pip install -r requirements.txt
```
### Configuration
Set environment variables in `.env`:
```bash
PLUGIN_REGISTRY_URL=https://plugins.aitbc.com
INSTALLATION_PATH=/opt/aitbc/plugins
AUTO_UPDATE_ENABLED=false
```
### Running the Service
```bash
.venv/bin/python main.py
```
## Developer Guide
### Development Setup
1. Clone the repository
2. Create virtual environment: `python -m venv .venv`
3. Install dependencies: `pip install -r requirements.txt`
4. Configure plugin registry
5. Run tests: `pytest tests/`
### Project Structure
```
plugin-marketplace/
├── src/
│ ├── plugin_catalog/ # Plugin catalog
│ ├── plugin_installer/ # Plugin installation
│ ├── dependency_manager/ # Dependency management
│ ├── version_manager/ # Version management
│ └── license_manager/ # License management
├── tests/ # Test suite
└── pyproject.toml # Project configuration
```
### Testing
```bash
# Run all tests
pytest tests/
# Run installer tests
pytest tests/test_installer.py
# Run dependency manager tests
pytest tests/test_dependencies.py
```
## API Reference
### Plugin Catalog
#### List Plugins
```http
GET /api/v1/marketplace/plugins?category=analytics&limit=20
```
#### Get Plugin Details
```http
GET /api/v1/marketplace/plugins/{plugin_id}
```
#### Search Plugins
```http
POST /api/v1/marketplace/plugins/search
Content-Type: application/json
{
"query": "analytics",
"filters": {
"category": "string",
"version": "string"
}
}
```
### Plugin Installation
#### Install Plugin
```http
POST /api/v1/marketplace/plugins/install
Content-Type: application/json
{
"plugin_id": "string",
"version": "string",
"auto_dependencies": true
}
```
#### Uninstall Plugin
```http
DELETE /api/v1/marketplace/plugins/{plugin_id}
```
#### Update Plugin
```http
POST /api/v1/marketplace/plugins/{plugin_id}/update
Content-Type: application/json
{
"version": "string"
}
```
#### Get Installation Status
```http
GET /api/v1/marketplace/plugins/{plugin_id}/status
```
### Dependencies
#### Get Plugin Dependencies
```http
GET /api/v1/marketplace/plugins/{plugin_id}/dependencies
```
#### Resolve Dependencies
```http
POST /api/v1/marketplace/dependencies/resolve
Content-Type: application/json
{
"plugin_ids": ["plugin1", "plugin2"]
}
```
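Dependency resolution essentially computes an install order in which every plugin follows its dependencies. A sketch using the standard library's topological sorter (version-constraint handling, which the real resolver also performs, is omitted):

```python
from graphlib import TopologicalSorter  # Python 3.9+

def resolve_install_order(dependency_graph):
    """dependency_graph maps plugin -> set of plugins it depends on.

    Returns an install order with dependencies first; a circular
    dependency raises graphlib.CycleError.
    """
    return list(TopologicalSorter(dependency_graph).static_order())
```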
### Versions
#### List Plugin Versions
```http
GET /api/v1/marketplace/plugins/{plugin_id}/versions
```
#### Get Version Compatibility
```http
GET /api/v1/marketplace/plugins/{plugin_id}/compatibility?version=1.0.0
```
### Licenses
#### Validate License
```http
POST /api/v1/marketplace/licenses/validate
Content-Type: application/json
{
"plugin_id": "string",
"license_key": "string"
}
```
#### Get License Info
```http
GET /api/v1/marketplace/plugins/{plugin_id}/license
```
## Configuration
### Environment Variables
- `PLUGIN_REGISTRY_URL`: URL for plugin registry
- `INSTALLATION_PATH`: Path for plugin installation
- `AUTO_UPDATE_ENABLED`: Enable automatic plugin updates
- `MAX_CONCURRENT_INSTALLS`: Maximum concurrent installations
### Plugin Categories
- **Analytics**: Data analysis and reporting plugins
- **Security**: Security scanning and monitoring plugins
- **Infrastructure**: Infrastructure management plugins
- **Trading**: Trading and exchange plugins
### Installation Parameters
- **Installation Path**: Directory for plugin installation
- **Dependency Resolution**: Automatic dependency handling
- **Version Constraints**: Version compatibility checks
## Troubleshooting
**Plugin installation failed**: Check plugin compatibility and dependencies.
**License validation failed**: Verify license key and plugin ID.
**Dependency resolution failed**: Check dependency conflicts and versions.
**Auto-update not working**: Verify auto-update configuration and registry connectivity.
## Security Notes
- Validate plugin signatures before installation
- Scan plugins for security vulnerabilities
- Use HTTPS for plugin downloads
- Implement plugin sandboxing
- Regularly update plugins for security patches
- Monitor for malicious plugin behavior

# Plugin Registry
## Status
✅ Operational
## Overview
Registry plugin for managing plugin metadata, versions, and availability in the AITBC ecosystem.
## Architecture
### Core Components
- **Registry Database**: Stores plugin metadata and versions
- **Metadata Manager**: Manages plugin metadata
- **Version Controller**: Controls plugin versioning
- **Availability Checker**: Checks plugin availability
- **Indexer**: Indexes plugins for search
## Quick Start (End Users)
### Prerequisites
- Python 3.13+
- PostgreSQL database
- Storage for plugin files
### Installation
```bash
cd /opt/aitbc/apps/plugin-registry
.venv/bin/pip install -r requirements.txt
```
### Configuration
Set environment variables in `.env`:
```bash
DATABASE_URL=postgresql://user:pass@localhost/plugin_registry
STORAGE_PATH=/opt/aitbc/plugins/storage
INDEXING_ENABLED=true
```
### Running the Service
```bash
.venv/bin/python main.py
```
## Developer Guide
### Development Setup
1. Clone the repository
2. Create virtual environment: `python -m venv .venv`
3. Install dependencies: `pip install -r requirements.txt`
4. Set up database
5. Configure storage path
6. Run tests: `pytest tests/`
### Project Structure
```
plugin-registry/
├── src/
│ ├── registry_database/ # Registry database
│ ├── metadata_manager/ # Metadata management
│ ├── version_controller/ # Version control
│ ├── availability_checker/ # Availability checking
│ └── indexer/ # Plugin indexing
├── storage/ # Plugin storage
├── tests/ # Test suite
└── pyproject.toml # Project configuration
```
### Testing
```bash
# Run all tests
pytest tests/
# Run registry database tests
pytest tests/test_registry.py
# Run indexer tests
pytest tests/test_indexer.py
```
## API Reference
### Plugin Registration
#### Register Plugin
```http
POST /api/v1/registry/plugins
Content-Type: application/json
{
"plugin_id": "string",
"name": "string",
"version": "1.0.0",
"description": "string",
"author": "string",
"category": "string",
"metadata": {}
}
```
#### Update Plugin Metadata
```http
PUT /api/v1/registry/plugins/{plugin_id}
Content-Type: application/json
{
"version": "1.0.1",
"description": "updated description",
"metadata": {}
}
```
#### Get Plugin Metadata
```http
GET /api/v1/registry/plugins/{plugin_id}
```
### Version Management
#### Add Version
```http
POST /api/v1/registry/plugins/{plugin_id}/versions
Content-Type: application/json
{
"version": "1.1.0",
"changes": ["fix1", "feature2"],
"compatibility": {}
}
```
#### List Versions
```http
GET /api/v1/registry/plugins/{plugin_id}/versions
```
#### Get Latest Version
```http
GET /api/v1/registry/plugins/{plugin_id}/latest
```
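Picking the latest version requires numeric comparison, since plain string ordering ranks `1.9.0` above `1.10.0`. A sketch for simple `x.y.z` version strings (pre-release suffixes like `1.0.0-rc1` are not handled here):

```python
def parse_version(version):
    """'1.10.2' -> (1, 10, 2); tuple comparison then orders versions numerically."""
    return tuple(int(part) for part in version.split("."))

def latest_version(versions):
    """Pick the highest version from a list of 'x.y.z' strings."""
    return max(versions, key=parse_version)
```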
### Availability
#### Check Availability
```http
GET /api/v1/registry/plugins/{plugin_id}/availability?version=1.0.0
```
#### Update Availability
```http
POST /api/v1/registry/plugins/{plugin_id}/availability
Content-Type: application/json
{
"version": "1.0.0",
"available": true,
"download_url": "string"
}
```
### Search
#### Search Plugins
```http
POST /api/v1/registry/search
Content-Type: application/json
{
"query": "analytics",
"filters": {
"category": "string",
"author": "string",
"version": "string"
},
"limit": 20
}
```
#### Reindex Plugins
```http
POST /api/v1/registry/reindex
```
## Configuration
### Environment Variables
- `DATABASE_URL`: PostgreSQL connection string
- `STORAGE_PATH`: Path for plugin storage
- `INDEXING_ENABLED`: Enable plugin indexing
- `MAX_METADATA_SIZE`: Maximum metadata size
### Registry Parameters
- **Plugin ID Format**: Format for plugin identifiers
- **Version Schema**: Version numbering scheme
- **Metadata Schema**: Metadata validation schema
### Indexing
- **Full-Text Search**: Enable full-text search
- **Faceted Search**: Enable faceted search
- **Index Refresh Interval**: Index refresh frequency
## Troubleshooting
**Plugin registration failed**: Validate plugin metadata and version format.
**Version conflict**: Check existing versions and compatibility rules.
**Index not updating**: Verify indexing configuration and database connectivity.
**Storage full**: Review storage usage and cleanup old versions.
## Security Notes
- Validate plugin metadata on registration
- Implement access controls for registry operations
- Scan plugins for security issues
- Regularly audit registry entries
- Implement rate limiting for API endpoints

# Plugin Security
## Status
✅ Operational
## Overview
Security plugin for scanning, validating, and monitoring AITBC plugins for security vulnerabilities and compliance.
## Architecture
### Core Components
- **Vulnerability Scanner**: Scans plugins for security vulnerabilities
- **Code Analyzer**: Analyzes plugin code for security issues
- **Dependency Checker**: Checks plugin dependencies for vulnerabilities
- **Compliance Validator**: Validates plugin compliance with security standards
- **Policy Engine**: Enforces security policies
## Quick Start (End Users)
### Prerequisites
- Python 3.13+
- Access to plugin files
- Vulnerability database access
### Installation
```bash
cd /opt/aitbc/apps/plugin-security
.venv/bin/pip install -r requirements.txt
```
### Configuration
Set environment variables in `.env`:
```bash
VULN_DB_URL=https://vuln-db.example.com
SCAN_DEPTH=full
COMPLIANCE_STANDARDS=OWASP,SANS
POLICY_FILE=/path/to/policies.yaml
```
### Running the Service
```bash
.venv/bin/python main.py
```
## Developer Guide
### Development Setup
1. Clone the repository
2. Create virtual environment: `python -m venv .venv`
3. Install dependencies: `pip install -r requirements.txt`
4. Configure vulnerability database
5. Configure security policies
6. Run tests: `pytest tests/`
### Project Structure
```
plugin-security/
├── src/
│ ├── vulnerability_scanner/ # Vulnerability scanning
│ ├── code_analyzer/ # Code analysis
│ ├── dependency_checker/ # Dependency checking
│ ├── compliance_validator/ # Compliance validation
│ └── policy_engine/ # Policy enforcement
├── policies/ # Security policies
├── tests/ # Test suite
└── pyproject.toml # Project configuration
```
### Testing
```bash
# Run all tests
pytest tests/
# Run vulnerability scanner tests
pytest tests/test_scanner.py
# Run compliance validator tests
pytest tests/test_compliance.py
```
## API Reference
### Vulnerability Scanning
#### Scan Plugin
```http
POST /api/v1/security/scan
Content-Type: application/json
{
"plugin_id": "string",
"version": "1.0.0",
"scan_depth": "quick|full",
"scan_types": ["code", "dependencies", "configuration"]
}
```
#### Get Scan Results
```http
GET /api/v1/security/scan/{scan_id}
```
#### Get Scan History
```http
GET /api/v1/security/scan/history?plugin_id=string
```
### Code Analysis
#### Analyze Code
```http
POST /api/v1/security/analyze
Content-Type: application/json
{
"plugin_id": "string",
"code_path": "/path/to/code",
"analysis_types": ["sast", "secrets", "quality"]
}
```
#### Get Analysis Report
```http
GET /api/v1/security/analyze/{analysis_id}
```
### Dependency Checking
#### Check Dependencies
```http
POST /api/v1/security/dependencies/check
Content-Type: application/json
{
"plugin_id": "string",
"dependencies": [{"name": "string", "version": "string"}]
}
```
#### Get Vulnerability Report
```http
GET /api/v1/security/dependencies/vulnerabilities?plugin_id=string
```
### Compliance Validation
#### Validate Compliance
```http
POST /api/v1/security/compliance/validate
Content-Type: application/json
{
"plugin_id": "string",
"standards": ["OWASP", "SANS"],
"severity": "high|medium|low"
}
```
#### Get Compliance Report
```http
GET /api/v1/security/compliance/report/{validation_id}
```
### Policy Enforcement
#### Check Policy Compliance
```http
POST /api/v1/security/policies/check
Content-Type: application/json
{
"plugin_id": "string",
"policy_name": "string"
}
```
#### List Policies
```http
GET /api/v1/security/policies
```
## Configuration
### Environment Variables
- `VULN_DB_URL`: Vulnerability database URL
- `SCAN_DEPTH`: Default scan depth (quick/full)
- `COMPLIANCE_STANDARDS`: Compliance standards to enforce
- `POLICY_FILE`: Path to security policies file
### Scan Types
- **SAST**: Static Application Security Testing
- **Secrets Detection**: Detect hardcoded secrets
- **Dependency Scanning**: Scan dependencies for vulnerabilities
- **Configuration Analysis**: Analyze configuration files
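Secrets detection at its core is pattern matching over plugin source lines. A minimal sketch with a few illustrative patterns (a production scanner uses a far larger, curated rule set; these regexes are examples, not the service's rules):

```python
import re

# Illustrative patterns only; real rule sets are much larger.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private_key_header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "generic_api_key": re.compile(r"(?i)api[_-]?key\s*[=:]\s*['\"][A-Za-z0-9]{16,}['\"]"),
}

def scan_for_secrets(text):
    """Return (pattern_name, line_number) pairs for every match."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                findings.append((name, lineno))
    return findings
```

Reporting line numbers rather than matched text avoids echoing the secret itself into scan results.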
### Compliance Standards
- **OWASP**: OWASP security standards
- **SANS**: SANS security controls
- **CIS**: CIS benchmarks
## Troubleshooting
**Scan not running**: Check vulnerability database connectivity and plugin accessibility.
**False positives**: Review scan rules and adjust severity thresholds.
**Compliance validation failed**: Review plugin code against compliance standards.
**Policy check failed**: Verify policy configuration and plugin compliance.
## Security Notes
- Regularly update vulnerability database
- Use isolated environment for scanning
- Implement rate limiting for scan requests
- Secure scan results storage
- Regularly audit security policies
- Monitor for security incidents

# Wallet Applications
Multi-chain wallet services for AITBC.
## Applications
- [Wallet](wallet.md) - Multi-chain wallet with support for multiple blockchains
## Features
- Multi-chain support
- Transaction signing
- Balance tracking
- Address management

# Wallet Daemon
## Purpose & Scope
Local FastAPI service that manages encrypted keys, signs transactions/receipts, and exposes wallet RPC endpoints. See `docs/bootstrap/wallet_daemon.md` for the implementation plan.
## Development Setup
- Create a Python virtual environment under `apps/wallet-daemon/.venv` or use Poetry.
- Install dependencies via Poetry (preferred):
```bash
poetry install
```
- Copy/create `.env` and configure coordinator access:
```bash
cp .env.example .env # create file if missing
```
- `COORDINATOR_BASE_URL` (default `http://localhost:8011`)
- `COORDINATOR_API_KEY` (development key to verify receipts)
- Run the service locally:
```bash
poetry run uvicorn app.main:app --host 127.0.0.2 --port 8071 --reload
```
- REST receipt endpoints:
- `GET /v1/receipts/{job_id}` (latest receipt + signature validations)
- `GET /v1/receipts/{job_id}/history` (full history + validations)
- JSON-RPC interface (`POST /rpc`):
- Method `receipts.verify_latest`
- Method `receipts.verify_history`
- Keystore scaffolding:
- `KeystoreService` uses Argon2id + XChaCha20-Poly1305 via `app/crypto/encryption.py` (in-memory for now).
- Future milestones will add persistent storage and wallet lifecycle routes.
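The JSON-RPC methods above can be exercised with a small payload-builder sketch (the `job_id` parameter shape is an assumption, not confirmed by the API):

```python
import json
from itertools import count

_ids = count(1)  # simple monotonically increasing request id

def rpc_payload(method, params):
    """Build a JSON-RPC 2.0 request body for the daemon's POST /rpc endpoint."""
    return {"jsonrpc": "2.0", "id": next(_ids), "method": method, "params": params}

def verify_latest_request(job_id):
    # Method name taken from the list above; the params shape is assumed.
    return json.dumps(rpc_payload("receipts.verify_latest", {"job_id": job_id}))
```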