Compare commits: 5407ba391a...v0.2.3 (62 commits)

Commit SHA1s:
116db87bd2, de6e153854, a20190b9b8, 2dafa5dd73, f72d6768f8, 209f1e46f5,
a510b9bdb4, 43717b21fb, d2f7100594, 6b6653eeae, 8fce67ecf3, e2844f44f8,
bece27ed00, a3197bd9ad, 6c0cdc640b, 6e36b453d9, ef43a1eecd, f5b3c8c1bd,
f061051ec4, f646bd7ed4, 0985308331, 58020b7eeb, e4e5020a0e, a9c2ebe3f7,
e7eecacf9b, fd3ba4a62d, 395b87e6f5, bda3a99a68, 65b5d53b21, b43b3aa3da,
7885a9e749, d0d7e8fd5f, 009dc3ec53, c497e1512e, bc942c0ff9, 819a98fe43,
eec3d2b41f, 54b310188e, aec5bd2eaa, a046296a48, 52f413af87, d38ba7d074,
3010cf6540, b55409c356, 5ee4f07140, baa03cd85c, e8b3133250, 07432b41ad,
91062a9e1b, 55bb6ac96f, ce6d0625e5, 2f4fc9c02d, 747b445157, 98409556f2,
a2216881bd, 4f0743adf4, f2b8d0593e, 830c4be4f1, e14ba03a90, cf3536715b,
376289c4e2, e977fc5fcb
**.gitignore** (vendored, 11 changed lines)
```diff
@@ -168,11 +168,7 @@ temp/
 # ===================
 # Wallet Files (contain private keys)
 # ===================
-*.json
-home/client/client_wallet.json
-home/genesis_wallet.json
-home/miner/miner_wallet.json
-
+# Specific wallet and private key JSON files (contain private keys)
 # ===================
 # Project Specific
 # ===================
@@ -306,7 +302,6 @@ logs/
 *.db
 *.sqlite
 wallet*.json
 keystore/
 certificates/
 
 # Guardian contract databases (contain spending limits)
@@ -320,3 +315,7 @@ guardian_contracts/
 # Agent protocol data
 .agent_data/
 .agent_data/*
+
+# Operational and setup files
+results/
+tools/
```
```diff
@@ -63,7 +63,7 @@ aitbc marketplace receipts list --limit 3
 
 # Or via API
 curl -H "X-Api-Key: client_dev_key_1" \
-  http://127.0.0.1:18000/v1/explorer/receipts?limit=3
+  http://127.0.0.1:8000/v1/explorer/receipts?limit=3
 
 # Verify blockchain transaction
 curl -s http://aitbc.keisanki.net/rpc/transactions | \
```
**COMPLETE_TEST_PLAN.md** (new file, 262 lines)
# AITBC Complete Test Plan - Genesis to Full Operations
# Using OpenClaw Skills and Workflow Scripts

## 🎯 Test Plan Overview
Sequential testing from genesis block generation through full AI operations, using OpenClaw agents and skills.

## 📋 Prerequisites Check
```bash
# Verify OpenClaw is running
openclaw status

# Verify all AITBC services are running
systemctl list-units --type=service --state=running | grep aitbc

# Check wallet access
ls -la /var/lib/aitbc/keystore/
```
## 🚀 Phase 1: Genesis Block Generation (OpenClaw)

### Step 1.1: Pre-flight Setup
**Skill**: `openclaw-agent-testing-skill`
**Script**: `01_preflight_setup_openclaw.sh`

```bash
# Create OpenClaw session
SESSION_ID="genesis-test-$(date +%s)"

# Test OpenClaw agents first
openclaw agent --agent main --message "Execute openclaw-agent-testing-skill with operation: comprehensive, thinking_level: medium" --thinking medium

# Run pre-flight setup
/opt/aitbc/scripts/workflow-openclaw/01_preflight_setup_openclaw.sh
```

### Step 1.2: Genesis Authority Setup
**Skill**: `aitbc-basic-operations-skill`
**Script**: `02_genesis_authority_setup_openclaw.sh`

```bash
# Setup genesis node using OpenClaw
openclaw agent --agent main --message "Execute aitbc-basic-operations-skill to setup genesis authority, create genesis block, and initialize blockchain services" --thinking medium

# Run genesis setup script
/opt/aitbc/scripts/workflow-openclaw/02_genesis_authority_setup_openclaw.sh
```

### Step 1.3: Verify Genesis Block
**Skill**: `aitbc-transaction-processor`

```bash
# Verify genesis block creation
openclaw agent --agent main --message "Execute aitbc-transaction-processor to verify genesis block, check block height 0, and validate chain state" --thinking medium

# Manual verification
curl -s http://localhost:8006/rpc/head | jq '.height'
```
## 🔗 Phase 2: Follower Node Setup

### Step 2.1: Follower Node Configuration
**Skill**: `aitbc-basic-operations-skill`
**Script**: `03_follower_node_setup_openclaw.sh`

```bash
# Setup follower node (aitbc1)
openclaw agent --agent main --message "Execute aitbc-basic-operations-skill to setup follower node, connect to genesis, and establish sync" --thinking medium

# Run follower setup (from aitbc, targets aitbc1)
/opt/aitbc/scripts/workflow-openclaw/03_follower_node_setup_openclaw.sh
```

### Step 2.2: Verify Cross-Node Sync
**Skill**: `openclaw-agent-communicator`

```bash
# Test cross-node communication
openclaw agent --agent main --message "Execute openclaw-agent-communicator to verify aitbc1 sync with genesis node" --thinking medium

# Check sync status
ssh aitbc1 'curl -s http://localhost:8006/rpc/head | jq ".height"'
```
## 💰 Phase 3: Wallet Operations

### Step 3.1: Cross-Node Wallet Creation
**Skill**: `aitbc-wallet-manager`
**Script**: `04_wallet_operations_openclaw.sh`

```bash
# Create wallets on both nodes
openclaw agent --agent main --message "Execute aitbc-wallet-manager to create cross-node wallets and establish wallet infrastructure" --thinking medium

# Run wallet operations
/opt/aitbc/scripts/workflow-openclaw/04_wallet_operations_openclaw.sh
```

### Step 3.2: Fund Wallets & Initial Transactions
**Skill**: `aitbc-transaction-processor`

```bash
# Fund wallets from genesis
openclaw agent --agent main --message "Execute aitbc-transaction-processor to fund wallets and execute initial cross-node transactions" --thinking medium

# Verify transactions
curl -s http://localhost:8006/rpc/balance/<wallet_address>
```
## 🤖 Phase 4: AI Operations Setup

### Step 4.1: Coordinator API Testing
**Skill**: `aitbc-ai-operator`

```bash
# Test AI coordinator functionality
openclaw agent --agent main --message "Execute aitbc-ai-operator to test coordinator API, job submission, and AI service integration" --thinking medium

# Test API endpoints
curl -s http://localhost:8000/health
curl -s http://localhost:8000/docs
```

### Step 4.2: GPU Marketplace Setup
**Skill**: `aitbc-marketplace-participant`

```bash
# Initialize GPU marketplace
openclaw agent --agent main --message "Execute aitbc-marketplace-participant to setup GPU marketplace, register providers, and prepare for AI jobs" --thinking medium

# Verify marketplace status
curl -s http://localhost:8000/api/marketplace/stats
```
## 🧪 Phase 5: Complete AI Workflow Testing

### Step 5.1: Ollama GPU Testing
**Skill**: `ollama-gpu-testing-skill`
**Script**: Reference `ollama-gpu-test-openclaw.md`

```bash
# Execute complete Ollama GPU test
openclaw agent --agent main --message "Execute ollama-gpu-testing-skill with complete end-to-end test: client submission → GPU processing → blockchain recording" --thinking high

# Monitor job progress
curl -s http://localhost:8000/api/jobs
```

### Step 5.2: Advanced AI Operations
**Skill**: `aitbc-ai-operations-skill`
**Script**: `06_advanced_ai_workflow_openclaw.sh`

```bash
# Run advanced AI workflow
openclaw agent --agent main --message "Execute aitbc-ai-operations-skill with advanced AI job processing, multi-modal RL, and agent coordination" --thinking high

# Execute advanced workflow script
/opt/aitbc/scripts/workflow-openclaw/06_advanced_ai_workflow_openclaw.sh
```
## 🔄 Phase 6: Agent Coordination Testing

### Step 6.1: Multi-Agent Coordination
**Skill**: `openclaw-agent-communicator`
**Script**: `07_enhanced_agent_coordination.sh`

```bash
# Test agent coordination
openclaw agent --agent main --message "Execute openclaw-agent-communicator to establish multi-agent coordination and cross-node agent messaging" --thinking high

# Run coordination script
/opt/aitbc/scripts/workflow-openclaw/07_enhanced_agent_coordination.sh
```

### Step 6.2: AI Economics Testing
**Skill**: `aitbc-marketplace-participant`
**Script**: `08_ai_economics_masters.sh`

```bash
# Test AI economics and marketplace dynamics
openclaw agent --agent main --message "Execute aitbc-marketplace-participant to test AI economics, pricing models, and marketplace dynamics" --thinking high

# Run economics test
/opt/aitbc/scripts/workflow-openclaw/08_ai_economics_masters.sh
```
## 📊 Phase 7: Complete Integration Test

### Step 7.1: End-to-End Workflow
**Script**: `05_complete_workflow_openclaw.sh`

```bash
# Execute complete workflow
openclaw agent --agent main --message "Execute complete end-to-end AITBC workflow: genesis → nodes → wallets → AI operations → marketplace → economics" --thinking high

# Run complete workflow
/opt/aitbc/scripts/workflow-openclaw/05_complete_workflow_openclaw.sh
```

### Step 7.2: Performance & Stress Testing
**Skill**: `openclaw-agent-testing-skill`

```bash
# Stress test the system
openclaw agent --agent main --message "Execute openclaw-agent-testing-skill with operation: comprehensive, test_duration: 300, concurrent_agents: 3" --thinking high
```
## ✅ Verification Checklist

### After Each Phase:
- [ ] Services running: `systemctl status 'aitbc-*'`
- [ ] Blockchain syncing: compare block heights across nodes
- [ ] API responding: health endpoints return 200
- [ ] Wallets funded: balance checks
- [ ] Agent communication: OpenClaw logs

### Final Verification:
- [ ] Genesis block created and chain height > 0
- [ ] Follower node synced
- [ ] Cross-node transactions successful
- [ ] AI jobs processing
- [ ] Marketplace active
- [ ] All agents communicating
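The per-phase checks can also be scripted. Below is a minimal, hypothetical sketch where each check is an injected probe callable; the probe bodies here are placeholders for the `systemctl`/`curl` commands listed above, not real service calls.

```python
# Hypothetical checklist runner: each probe is a zero-argument callable that
# returns truthy on success. On a real node the lambdas would shell out to
# systemctl / curl as shown in the checklist above.
def run_checklist(probes):
    """Evaluate every probe and return (results, names_of_failed_checks)."""
    results = {name: bool(probe()) for name, probe in probes.items()}
    failed = [name for name, ok in results.items() if not ok]
    return results, failed

probes = {
    "services_running": lambda: True,    # placeholder for: systemctl status 'aitbc-*'
    "blockchain_syncing": lambda: True,  # placeholder for: block-height comparison
    "api_responding": lambda: True,      # placeholder for: GET /health
    "wallets_funded": lambda: False,     # placeholder balance check (failing example)
}
results, failed = run_checklist(probes)
print(failed)  # → ['wallets_funded']
```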
## 🚨 Troubleshooting

### Common Issues:
1. **OpenClaw not responding**: check gateway status
2. **Services not starting**: check logs with `journalctl -u 'aitbc-*'`
3. **Sync issues**: verify network connectivity between nodes
4. **Wallet problems**: check keystore permissions
5. **AI jobs failing**: verify GPU availability and Ollama status

### Recovery Commands:
```bash
# Reset OpenClaw session
SESSION_ID="recovery-$(date +%s)"

# Restart all services (quote the glob so systemctl expands it, not the shell)
systemctl restart 'aitbc-*'

# Reset blockchain (if needed)
rm -rf /var/lib/aitbc/data/ait-mainnet/*
# Then re-run Phase 1
```
## 📈 Success Metrics

### Expected Results:
- Genesis block created and validated
- 2+ nodes syncing properly
- Cross-node transactions working
- AI jobs submitting and completing
- Marketplace active with providers
- Agent coordination established
- End-to-end workflow successful

### Performance Targets:
- Block production: every 10 seconds
- Transaction confirmation: < 30 seconds
- AI job completion: < 2 minutes
- Agent response time: < 5 seconds
- Cross-node sync: < 1 minute
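The targets above can be checked mechanically. The sketch below treats each target as an upper bound in seconds, which is a simplification, and the measured numbers are made-up examples rather than real output.

```python
# Performance targets from the list above, expressed as upper bounds in seconds.
targets = {
    "block_interval_s": 10,
    "tx_confirmation_s": 30,
    "ai_job_completion_s": 120,
    "agent_response_s": 5,
    "cross_node_sync_s": 60,
}

def check_targets(measured, targets):
    """Return only the metrics whose measured value exceeds its target."""
    return {k: v for k, v in measured.items() if v > targets[k]}

# Example measurements (illustrative numbers, not real test output).
measured = {"block_interval_s": 10, "tx_confirmation_s": 12,
            "ai_job_completion_s": 95, "agent_response_s": 7,
            "cross_node_sync_s": 40}
print(check_targets(measured, targets))  # → {'agent_response_s': 7}
```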
**aitbc-blockchain-rpc-code-map.md** (new file, 142 lines)
# AITBC Blockchain RPC Service Code Map

## Service Configuration
**File**: `/etc/systemd/system/aitbc-blockchain-rpc.service`
**Entry Point**: `python3 -m uvicorn aitbc_chain.app:app --host ${rpc_bind_host} --port ${rpc_bind_port}`
**Working Directory**: `/opt/aitbc/apps/blockchain-node`
**Environment File**: `/etc/aitbc/blockchain.env`
## Application Structure

### 1. Main Entry Point: `app.py`
**Location**: `/opt/aitbc/apps/blockchain-node/src/aitbc_chain/app.py`

#### Key Components:
- **FastAPI App**: `create_app()` function
- **Lifespan Manager**: `async def lifespan(app: FastAPI)`
- **Middleware**: RateLimitMiddleware, RequestLoggingMiddleware
- **Routers**: rpc_router, websocket_router, metrics_router

#### Startup Sequence (lifespan function):
1. `init_db()` - Initialize database
2. `init_mempool()` - Initialize mempool
3. `create_backend()` - Create gossip backend
4. `await gossip_broker.set_backend(backend)` - Set up gossip broker
5. **PoA Proposer** (if enabled):
   - Check `settings.enable_block_production and settings.proposer_id`
   - Create `PoAProposer` instance
   - Call `asyncio.create_task(proposer.start())`
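The startup sequence above can be sketched as a lifespan-style async context manager. Everything below is illustrative: the helper bodies are stand-ins, and only the call order and the two-flag guard mirror the list above.

```python
import asyncio
from contextlib import asynccontextmanager

startup_log = []

# Stand-ins for the real helpers named in the startup sequence above.
def init_db():
    startup_log.append("init_db")

def init_mempool():
    startup_log.append("init_mempool")

def create_backend():
    startup_log.append("create_backend")
    return object()  # placeholder gossip backend

class GossipBroker:
    async def set_backend(self, backend):
        startup_log.append("set_backend")

gossip_broker = GossipBroker()

class Settings:
    # Mirrors the two flags checked before the proposer is started.
    enable_block_production = False
    proposer_id = ""

settings = Settings()

@asynccontextmanager
async def lifespan(app):
    # Steps 1-4: database, mempool, gossip backend, broker.
    init_db()
    init_mempool()
    backend = create_backend()
    await gossip_broker.set_backend(backend)
    # Step 5: PoA proposer, only when block production is configured.
    if settings.enable_block_production and settings.proposer_id:
        startup_log.append("proposer_started")
    yield  # the application serves requests here

async def main():
    async with lifespan(app=None):
        pass

asyncio.run(main())
print(startup_log)  # → ['init_db', 'init_mempool', 'create_backend', 'set_backend']
```

With `enable_block_production=false` and an empty `proposer_id`, as in the current config, the proposer branch never runs.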
### 2. RPC Router: `rpc/router.py`
**Location**: `/opt/aitbc/apps/blockchain-node/src/aitbc_chain/rpc/router.py`

#### Key Endpoints:
- `GET /rpc/head` - Returns current chain head (404 when no blocks exist)
- `GET /rpc/mempool` - Returns pending transactions (200 OK)
- `GET /rpc/blocks/{height}` - Returns block by height
- `POST /rpc/transaction` - Submit transaction
- `GET /rpc/blocks-range` - Get blocks in height range
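The 404-on-empty-chain behavior of `GET /rpc/head` is worth pinning down, since it explains the 404s seen in the monitoring analysis on a chain with no blocks. A toy sketch of that semantics (hypothetical handler, not the real router code):

```python
# Hypothetical illustration of the /rpc/head semantics, not the actual router.
blocks = []  # in-memory stand-in for the block store

def get_head():
    """Return (status_code, body) the way GET /rpc/head would."""
    if not blocks:
        # Fresh chain: there is no head yet, hence 404 Not Found.
        return 404, {"detail": "no blocks"}
    head = max(blocks, key=lambda b: b["height"])
    return 200, head

print(get_head())  # fresh node → (404, {'detail': 'no blocks'})
blocks.append({"height": 0, "hash": "genesis"})
print(get_head())  # after genesis → (200, {'height': 0, 'hash': 'genesis'})
```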
### 3. Gossip System: `gossip/broker.py`
**Location**: `/opt/aitbc/apps/blockchain-node/src/aitbc_chain/gossip/broker.py`

#### Backend Types:
- `InMemoryGossipBackend` - Local memory backend (currently used)
- `BroadcastGossipBackend` - Network broadcast backend

#### Key Functions:
- `create_backend(backend_type, broadcast_url)` - Creates backend instance
- `gossip_broker.set_backend(backend)` - Sets active backend
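A sketch of the in-memory backend idea: publishes fan out only to subscribers in the same process, which is why the memory backend provides no cross-node sync. The names mirror the map above, but the implementation is illustrative.

```python
import asyncio

class InMemoryGossipBackend:
    """Illustrative stand-in: messages reach local subscribers only."""
    def __init__(self):
        self.subscribers = []

    def subscribe(self):
        # Each subscriber gets its own queue of gossip messages.
        q = asyncio.Queue()
        self.subscribers.append(q)
        return q

    async def publish(self, message):
        for q in self.subscribers:
            await q.put(message)

def create_backend(backend_type="memory", broadcast_url=None):
    # Only the memory backend is sketched; "broadcast" would open a
    # network channel to other nodes (via broadcast_url).
    if backend_type == "memory":
        return InMemoryGossipBackend()
    raise ValueError(f"unknown backend: {backend_type}")

async def demo():
    backend = create_backend("memory")
    inbox = backend.subscribe()
    await backend.publish({"type": "block", "height": 1})
    return await inbox.get()

print(asyncio.run(demo()))  # → {'type': 'block', 'height': 1}
```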
### 4. Chain Sync System: `chain_sync.py`
**Location**: `/opt/aitbc/apps/blockchain-node/src/aitbc_chain/chain_sync.py`

#### ChainSyncService Class:
- **Purpose**: Synchronizes blocks between nodes
- **Key Methods**:
  - `async def start()` - Starts sync service
  - `async def _broadcast_blocks()` - **MONITORING SOURCE**
  - `async def _receive_blocks()` - Receives blocks from Redis

#### Monitoring Code (excerpt from the `_broadcast_blocks` method):
```python
async def _broadcast_blocks(self):
    """Broadcast local blocks to other nodes"""
    import aiohttp

    last_broadcast_height = 0
    retry_count = 0
    max_retries = 5
    base_delay = 2

    while not self._stop_event.is_set():
        try:
            # Get current head from local RPC
            async with aiohttp.ClientSession() as session:
                async with session.get(f"http://{self.source_host}:{self.source_port}/rpc/head") as resp:
                    if resp.status == 200:
                        head_data = await resp.json()
                        current_height = head_data.get('height', 0)

                        # Reset retry count on successful connection
                        retry_count = 0
```
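The `retry_count`, `max_retries`, and `base_delay` variables in the excerpt suggest exponential backoff on failed polls. The retry branch itself is not shown above, so the delay schedule below is an assumption, not the service's actual code:

```python
def backoff_delay(retry_count, base_delay=2, max_delay=60):
    """Exponential backoff: base_delay * 2**retry_count, capped at max_delay seconds."""
    return min(base_delay * (2 ** retry_count), max_delay)

# Delay after the 1st, 2nd, ... 6th consecutive failure.
print([backoff_delay(n) for n in range(6)])  # → [2, 4, 8, 16, 32, 60]
```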
### 5. PoA Consensus: `consensus/poa.py`
**Location**: `/opt/aitbc/apps/blockchain-node/src/aitbc_chain/consensus/poa.py`

#### PoAProposer Class:
- **Purpose**: Proposes blocks in the Proof-of-Authority system
- **Key Methods**:
  - `async def start()` - Starts proposer loop
  - `async def _run_loop()` - Main proposer loop
  - `def _fetch_chain_head()` - Fetches chain head from database
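A sketch of what a proposer loop of this shape might look like. The class and method names follow the map above, but the in-memory chain and the fixed block count are stand-ins for the real database-backed loop, which runs until stopped:

```python
import asyncio

class PoAProposer:
    """Illustrative stand-in: the real proposer reads its head from the
    database; here the chain is just an in-memory list."""
    def __init__(self, proposer_id, interval=0.01):
        self.proposer_id = proposer_id
        self.interval = interval  # seconds between proposed blocks
        self.chain = [{"height": 0, "proposer": "genesis"}]

    def _fetch_chain_head(self):
        return self.chain[-1]

    async def start(self, blocks=3):
        # The real _run_loop runs until a stop event; this demo proposes
        # a fixed number of blocks and exits.
        for _ in range(blocks):
            head = self._fetch_chain_head()
            self.chain.append({"height": head["height"] + 1,
                               "proposer": self.proposer_id})
            await asyncio.sleep(self.interval)

proposer = PoAProposer("authority-1")
asyncio.run(proposer.start())
print(proposer._fetch_chain_head()["height"])  # → 3
```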
### 6. Configuration: `blockchain.env`
**Location**: `/etc/aitbc/blockchain.env`

#### Key Settings:
- `rpc_bind_host=0.0.0.0`
- `rpc_bind_port=8006`
- `gossip_backend=memory` (currently set to memory backend)
- `enable_block_production=false` (currently disabled)
- `proposer_id=` (currently empty)
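`blockchain.env` is a plain KEY=value file. A minimal loader sketch follows; the format handling is an assumption, and in production the service receives these values through systemd's `EnvironmentFile=` rather than parsing the file itself:

```python
def load_env(text):
    """Parse simple KEY=value lines, skipping blanks and # comments."""
    settings = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        settings[key.strip()] = value.strip()
    return settings

sample = """\
rpc_bind_host=0.0.0.0
rpc_bind_port=8006
gossip_backend=memory
enable_block_production=false
proposer_id=
"""
cfg = load_env(sample)
print(cfg["rpc_bind_port"], cfg["enable_block_production"])  # → 8006 false
```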
## Monitoring Source Analysis

### Current Configuration:
- **PoA Proposer**: DISABLED (`enable_block_production=false`)
- **Gossip Backend**: MEMORY (no network sync)
- **ChainSyncService**: NOT EXPLICITLY STARTED

### Mystery Monitoring:
Despite all monitoring sources being disabled, the service still makes requests to:
- `GET /rpc/head` (404 Not Found)
- `GET /rpc/mempool` (200 OK)

### Possible Hidden Sources:
1. **Built-in Health Check**: The service might have an internal health-check mechanism
2. **Background Task**: There might be a hidden background task making these requests
3. **External Process**: Another process might be making these requests
4. **Gossip Backend**: Even the memory backend might have monitoring

### Network Behavior:
- **Source IP**: `10.1.223.1` (LXC gateway)
- **Destination**: `localhost:8006` (blockchain RPC)
- **Pattern**: Every 10 seconds
- **Requests**: `/rpc/head` + `/rpc/mempool`
## Conclusion

The monitoring traffic comes from **within the blockchain RPC service itself**, but the exact source remains unclear after examining all of the obvious candidates. The most likely explanations are:

1. **Hidden Health Check**: A built-in health-check mechanism not visible in the main code paths
2. **Memory Backend Monitoring**: Even the memory backend might have monitoring capabilities
3. **Internal Process**: A subprocess or thread within the main process making these requests

### Recommendations:
1. **Accept the monitoring** - it appears to be harmless internal health checking
2. **Add authentication** - require API keys for the RPC endpoints
3. **Modify the source code** - remove the hidden monitoring if needed

**The monitoring is confirmed to be internal to the blockchain RPC service, not external surveillance.**
**aitbc-miner** (new symbolic link → `/opt/aitbc/cli/miner_cli.py`)
```diff
@@ -18,8 +18,8 @@ class AITBCServiceIntegration:
             "coordinator_api": "http://localhost:8000",
             "blockchain_rpc": "http://localhost:8006",
             "exchange_service": "http://localhost:8001",
-            "marketplace": "http://localhost:8014",
-            "agent_registry": "http://localhost:8003"
+            "marketplace": "http://localhost:8002",
+            "agent_registry": "http://localhost:8013"
         }
         self.session = None
 
```
```diff
@@ -123,4 +123,4 @@ async def health_check():
 
 if __name__ == "__main__":
     import uvicorn
-    uvicorn.run(app, host="0.0.0.0", port=8004)
+    uvicorn.run(app, host="0.0.0.0", port=8012)
```
```diff
@@ -142,4 +142,4 @@ async def health_check():
 
 if __name__ == "__main__":
     import uvicorn
-    uvicorn.run(app, host="0.0.0.0", port=8003)
+    uvicorn.run(app, host="0.0.0.0", port=8013)
```
```diff
@@ -1285,4 +1285,4 @@ async def health():
     }
 
 if __name__ == "__main__":
-    uvicorn.run(app, host="0.0.0.0", port=8016)
+    uvicorn.run(app, host="0.0.0.0", port=8004)
```
**apps/blockchain-node/poetry.lock** (generated, 106 changed lines)
```diff
@@ -1,4 +1,4 @@
-# This file is automatically @generated by Poetry 2.2.1 and should not be changed by hand.
+# This file is automatically @generated by Poetry 2.3.2 and should not be changed by hand.
 
 [[package]]
 name = "aiosqlite"
```
```diff
@@ -403,61 +403,61 @@ markers = {main = "platform_system == \"Windows\" or sys_platform == \"win32\"",
 
 [[package]]
 name = "cryptography"
-version = "46.0.5"
+version = "46.0.6"
 description = "cryptography is a package which provides cryptographic recipes and primitives to Python developers."
 optional = false
 python-versions = "!=3.9.0,!=3.9.1,>=3.8"
 groups = ["main"]
 files = [
-    {file = "cryptography-46.0.5-cp311-abi3-macosx_10_9_universal2.whl", hash = "sha256:351695ada9ea9618b3500b490ad54c739860883df6c1f555e088eaf25b1bbaad"},
-    {file = "cryptography-46.0.5-cp311-abi3-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:c18ff11e86df2e28854939acde2d003f7984f721eba450b56a200ad90eeb0e6b"},
-    {file = "cryptography-46.0.5-cp311-abi3-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:4d7e3d356b8cd4ea5aff04f129d5f66ebdc7b6f8eae802b93739ed520c47c79b"},
-    {file = "cryptography-46.0.5-cp311-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:50bfb6925eff619c9c023b967d5b77a54e04256c4281b0e21336a130cd7fc263"},
-    {file = "cryptography-46.0.5-cp311-abi3-manylinux_2_28_ppc64le.whl", hash = "sha256:803812e111e75d1aa73690d2facc295eaefd4439be1023fefc4995eaea2af90d"},
-    {file = "cryptography-46.0.5-cp311-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:3ee190460e2fbe447175cda91b88b84ae8322a104fc27766ad09428754a618ed"},
-    {file = "cryptography-46.0.5-cp311-abi3-manylinux_2_31_armv7l.whl", hash = "sha256:f145bba11b878005c496e93e257c1e88f154d278d2638e6450d17e0f31e558d2"},
-    {file = "cryptography-46.0.5-cp311-abi3-manylinux_2_34_aarch64.whl", hash = "sha256:e9251e3be159d1020c4030bd2e5f84d6a43fe54b6c19c12f51cde9542a2817b2"},
-    {file = "cryptography-46.0.5-cp311-abi3-manylinux_2_34_ppc64le.whl", hash = "sha256:47fb8a66058b80e509c47118ef8a75d14c455e81ac369050f20ba0d23e77fee0"},
-    {file = "cryptography-46.0.5-cp311-abi3-manylinux_2_34_x86_64.whl", hash = "sha256:4c3341037c136030cb46e4b1e17b7418ea4cbd9dd207e4a6f3b2b24e0d4ac731"},
-    {file = "cryptography-46.0.5-cp311-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:890bcb4abd5a2d3f852196437129eb3667d62630333aacc13dfd470fad3aaa82"},
-    {file = "cryptography-46.0.5-cp311-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:80a8d7bfdf38f87ca30a5391c0c9ce4ed2926918e017c29ddf643d0ed2778ea1"},
-    {file = "cryptography-46.0.5-cp311-abi3-win32.whl", hash = "sha256:60ee7e19e95104d4c03871d7d7dfb3d22ef8a9b9c6778c94e1c8fcc8365afd48"},
-    {file = "cryptography-46.0.5-cp311-abi3-win_amd64.whl", hash = "sha256:38946c54b16c885c72c4f59846be9743d699eee2b69b6988e0a00a01f46a61a4"},
-    {file = "cryptography-46.0.5-cp314-cp314t-macosx_10_9_universal2.whl", hash = "sha256:94a76daa32eb78d61339aff7952ea819b1734b46f73646a07decb40e5b3448e2"},
-    {file = "cryptography-46.0.5-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:5be7bf2fb40769e05739dd0046e7b26f9d4670badc7b032d6ce4db64dddc0678"},
-    {file = "cryptography-46.0.5-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:fe346b143ff9685e40192a4960938545c699054ba11d4f9029f94751e3f71d87"},
-    {file = "cryptography-46.0.5-cp314-cp314t-manylinux_2_28_aarch64.whl", hash = "sha256:c69fd885df7d089548a42d5ec05be26050ebcd2283d89b3d30676eb32ff87dee"},
-    {file = "cryptography-46.0.5-cp314-cp314t-manylinux_2_28_ppc64le.whl", hash = "sha256:8293f3dea7fc929ef7240796ba231413afa7b68ce38fd21da2995549f5961981"},
-    {file = "cryptography-46.0.5-cp314-cp314t-manylinux_2_28_x86_64.whl", hash = "sha256:1abfdb89b41c3be0365328a410baa9df3ff8a9110fb75e7b52e66803ddabc9a9"},
-    {file = "cryptography-46.0.5-cp314-cp314t-manylinux_2_31_armv7l.whl", hash = "sha256:d66e421495fdb797610a08f43b05269e0a5ea7f5e652a89bfd5a7d3c1dee3648"},
-    {file = "cryptography-46.0.5-cp314-cp314t-manylinux_2_34_aarch64.whl", hash = "sha256:4e817a8920bfbcff8940ecfd60f23d01836408242b30f1a708d93198393a80b4"},
-    {file = "cryptography-46.0.5-cp314-cp314t-manylinux_2_34_ppc64le.whl", hash = "sha256:68f68d13f2e1cb95163fa3b4db4bf9a159a418f5f6e7242564fc75fcae667fd0"},
-    {file = "cryptography-46.0.5-cp314-cp314t-manylinux_2_34_x86_64.whl", hash = "sha256:a3d1fae9863299076f05cb8a778c467578262fae09f9dc0ee9b12eb4268ce663"},
-    {file = "cryptography-46.0.5-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:c4143987a42a2397f2fc3b4d7e3a7d313fbe684f67ff443999e803dd75a76826"},
-    {file = "cryptography-46.0.5-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:7d731d4b107030987fd61a7f8ab512b25b53cef8f233a97379ede116f30eb67d"},
-    {file = "cryptography-46.0.5-cp314-cp314t-win32.whl", hash = "sha256:c3bcce8521d785d510b2aad26ae2c966092b7daa8f45dd8f44734a104dc0bc1a"},
-    {file = "cryptography-46.0.5-cp314-cp314t-win_amd64.whl", hash = "sha256:4d8ae8659ab18c65ced284993c2265910f6c9e650189d4e3f68445ef82a810e4"},
-    {file = "cryptography-46.0.5-cp38-abi3-macosx_10_9_universal2.whl", hash = "sha256:4108d4c09fbbf2789d0c926eb4152ae1760d5a2d97612b92d508d96c861e4d31"},
-    {file = "cryptography-46.0.5-cp38-abi3-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:7d1f30a86d2757199cb2d56e48cce14deddf1f9c95f1ef1b64ee91ea43fe2e18"},
-    {file = "cryptography-46.0.5-cp38-abi3-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:039917b0dc418bb9f6edce8a906572d69e74bd330b0b3fea4f79dab7f8ddd235"},
-    {file = "cryptography-46.0.5-cp38-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:ba2a27ff02f48193fc4daeadf8ad2590516fa3d0adeeb34336b96f7fa64c1e3a"},
-    {file = "cryptography-46.0.5-cp38-abi3-manylinux_2_28_ppc64le.whl", hash = "sha256:61aa400dce22cb001a98014f647dc21cda08f7915ceb95df0c9eaf84b4b6af76"},
-    {file = "cryptography-46.0.5-cp38-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:3ce58ba46e1bc2aac4f7d9290223cead56743fa6ab94a5d53292ffaac6a91614"},
-    {file = "cryptography-46.0.5-cp38-abi3-manylinux_2_31_armv7l.whl", hash = "sha256:420d0e909050490d04359e7fdb5ed7e667ca5c3c402b809ae2563d7e66a92229"},
-    {file = "cryptography-46.0.5-cp38-abi3-manylinux_2_34_aarch64.whl", hash = "sha256:582f5fcd2afa31622f317f80426a027f30dc792e9c80ffee87b993200ea115f1"},
-    {file = "cryptography-46.0.5-cp38-abi3-manylinux_2_34_ppc64le.whl", hash = "sha256:bfd56bb4b37ed4f330b82402f6f435845a5f5648edf1ad497da51a8452d5d62d"},
-    {file = "cryptography-46.0.5-cp38-abi3-manylinux_2_34_x86_64.whl", hash = "sha256:a3d507bb6a513ca96ba84443226af944b0f7f47dcc9a399d110cd6146481d24c"},
-    {file = "cryptography-46.0.5-cp38-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:9f16fbdf4da055efb21c22d81b89f155f02ba420558db21288b3d0035bafd5f4"},
-    {file = "cryptography-46.0.5-cp38-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:ced80795227d70549a411a4ab66e8ce307899fad2220ce5ab2f296e687eacde9"},
-    {file = "cryptography-46.0.5-cp38-abi3-win32.whl", hash = "sha256:02f547fce831f5096c9a567fd41bc12ca8f11df260959ecc7c3202555cc47a72"},
-    {file = "cryptography-46.0.5-cp38-abi3-win_amd64.whl", hash = "sha256:556e106ee01aa13484ce9b0239bca667be5004efb0aabbed28d353df86445595"},
-    {file = "cryptography-46.0.5-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:3b4995dc971c9fb83c25aa44cf45f02ba86f71ee600d81091c2f0cbae116b06c"},
-    {file = "cryptography-46.0.5-pp311-pypy311_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:bc84e875994c3b445871ea7181d424588171efec3e185dced958dad9e001950a"},
-    {file = "cryptography-46.0.5-pp311-pypy311_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:2ae6971afd6246710480e3f15824ed3029a60fc16991db250034efd0b9fb4356"},
-    {file = "cryptography-46.0.5-pp311-pypy311_pp73-manylinux_2_34_aarch64.whl", hash = "sha256:d861ee9e76ace6cf36a6a89b959ec08e7bc2493ee39d07ffe5acb23ef46d27da"},
-    {file = "cryptography-46.0.5-pp311-pypy311_pp73-manylinux_2_34_x86_64.whl", hash = "sha256:2b7a67c9cd56372f3249b39699f2ad479f6991e62ea15800973b956f4b73e257"},
-    {file = "cryptography-46.0.5-pp311-pypy311_pp73-win_amd64.whl", hash = "sha256:8456928655f856c6e1533ff59d5be76578a7157224dbd9ce6872f25055ab9ab7"},
-    {file = "cryptography-46.0.5.tar.gz", hash = "sha256:abace499247268e3757271b2f1e244b36b06f8515cf27c4d49468fc9eb16e93d"},
+    {file = "cryptography-46.0.6-cp311-abi3-macosx_10_9_universal2.whl", hash = "sha256:64235194bad039a10bb6d2d930ab3323baaec67e2ce36215fd0952fad0930ca8"},
+    {file = "cryptography-46.0.6-cp311-abi3-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:26031f1e5ca62fcb9d1fcb34b2b60b390d1aacaa15dc8b895a9ed00968b97b30"},
+    {file = "cryptography-46.0.6-cp311-abi3-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:9a693028b9cbe51b5a1136232ee8f2bc242e4e19d456ded3fa7c86e43c713b4a"},
+    {file = "cryptography-46.0.6-cp311-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:67177e8a9f421aa2d3a170c3e56eca4e0128883cf52a071a7cbf53297f18b175"},
+    {file = "cryptography-46.0.6-cp311-abi3-manylinux_2_28_ppc64le.whl", hash = "sha256:d9528b535a6c4f8ff37847144b8986a9a143585f0540fbcb1a98115b543aa463"},
+    {file = "cryptography-46.0.6-cp311-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:22259338084d6ae497a19bae5d4c66b7ca1387d3264d1c2c0e72d9e9b6a77b97"},
+    {file = "cryptography-46.0.6-cp311-abi3-manylinux_2_31_armv7l.whl", hash = "sha256:760997a4b950ff00d418398ad73fbc91aa2894b5c1db7ccb45b4f68b42a63b3c"},
+    {file = "cryptography-46.0.6-cp311-abi3-manylinux_2_34_aarch64.whl", hash = "sha256:3dfa6567f2e9e4c5dceb8ccb5a708158a2a871052fa75c8b78cb0977063f1507"},
+    {file = "cryptography-46.0.6-cp311-abi3-manylinux_2_34_ppc64le.whl", hash = "sha256:cdcd3edcbc5d55757e5f5f3d330dd00007ae463a7e7aa5bf132d1f22a4b62b19"},
+    {file = "cryptography-46.0.6-cp311-abi3-manylinux_2_34_x86_64.whl", hash = "sha256:d4e4aadb7fc1f88687f47ca20bb7227981b03afaae69287029da08096853b738"},
+    {file = "cryptography-46.0.6-cp311-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:2b417edbe8877cda9022dde3a008e2deb50be9c407eef034aeeb3a8b11d9db3c"},
+    {file = "cryptography-46.0.6-cp311-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:380343e0653b1c9d7e1f55b52aaa2dbb2fdf2730088d48c43ca1c7c0abb7cc2f"},
+    {file = "cryptography-46.0.6-cp311-abi3-win32.whl", hash = "sha256:bcb87663e1f7b075e48c3be3ecb5f0b46c8fc50b50a97cf264e7f60242dca3f2"},
+    {file = "cryptography-46.0.6-cp311-abi3-win_amd64.whl", hash = "sha256:6739d56300662c468fddb0e5e291f9b4d084bead381667b9e654c7dd81705124"},
+    {file = "cryptography-46.0.6-cp314-cp314t-macosx_10_9_universal2.whl", hash = "sha256:2ef9e69886cbb137c2aef9772c2e7138dc581fad4fcbcf13cc181eb5a3ab6275"},
+    {file = "cryptography-46.0.6-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:7f417f034f91dcec1cb6c5c35b07cdbb2ef262557f701b4ecd803ee8cefed4f4"},
+    {file = "cryptography-46.0.6-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:d24c13369e856b94892a89ddf70b332e0b70ad4a5c43cf3e9cb71d6d7ffa1f7b"},
+    {file = "cryptography-46.0.6-cp314-cp314t-manylinux_2_28_aarch64.whl", hash = "sha256:aad75154a7ac9039936d50cf431719a2f8d4ed3d3c277ac03f3339ded1a5e707"},
+    {file = "cryptography-46.0.6-cp314-cp314t-manylinux_2_28_ppc64le.whl", hash = "sha256:3c21d92ed15e9cfc6eb64c1f5a0326db22ca9c2566ca46d845119b45b4400361"},
+    {file = "cryptography-46.0.6-cp314-cp314t-manylinux_2_28_x86_64.whl", hash = "sha256:4668298aef7cddeaf5c6ecc244c2302a2b8e40f384255505c22875eebb47888b"},
+    {file = "cryptography-46.0.6-cp314-cp314t-manylinux_2_31_armv7l.whl", hash = "sha256:8ce35b77aaf02f3b59c90b2c8a05c73bac12cea5b4e8f3fbece1f5fddea5f0ca"},
+    {file = "cryptography-46.0.6-cp314-cp314t-manylinux_2_34_aarch64.whl", hash = "sha256:c89eb37fae9216985d8734c1afd172ba4927f5a05cfd9bf0e4863c6d5465b013"},
+    {file = "cryptography-46.0.6-cp314-cp314t-manylinux_2_34_ppc64le.whl", hash = "sha256:ed418c37d095aeddf5336898a132fba01091f0ac5844e3e8018506f014b6d2c4"},
+    {file = "cryptography-46.0.6-cp314-cp314t-manylinux_2_34_x86_64.whl", hash = "sha256:69cf0056d6947edc6e6760e5f17afe4bea06b56a9ac8a06de9d2bd6b532d4f3a"},
+    {file = "cryptography-46.0.6-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:8e7304c4f4e9490e11efe56af6713983460ee0780f16c63f219984dab3af9d2d"},
+    {file = "cryptography-46.0.6-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:b928a3ca837c77a10e81a814a693f2295200adb3352395fad024559b7be7a736"},
+    {file = "cryptography-46.0.6-cp314-cp314t-win32.whl", hash = "sha256:97c8115b27e19e592a05c45d0dd89c57f81f841cc9880e353e0d3bf25b2139ed"},
+    {file = "cryptography-46.0.6-cp314-cp314t-win_amd64.whl", hash = "sha256:c797e2517cb7880f8297e2c0f43bb910e91381339336f75d2c1c2cbf811b70b4"},
+    {file = "cryptography-46.0.6-cp38-abi3-macosx_10_9_universal2.whl", hash = "sha256:12cae594e9473bca1a7aceb90536060643128bb274fcea0fc459ab90f7d1ae7a"},
+    {file = "cryptography-46.0.6-cp38-abi3-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:639301950939d844a9e1c4464d7e07f902fe9a7f6b215bb0d4f28584729935d8"},
+    {file = "cryptography-46.0.6-cp38-abi3-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:ed3775295fb91f70b4027aeba878d79b3e55c0b3e97eaa4de71f8f23a9f2eb77"},
+    {file = "cryptography-46.0.6-cp38-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:8927ccfbe967c7df312ade694f987e7e9e22b2425976ddbf28271d7e58845290"},
```
|
||||
{file = "cryptography-46.0.6-cp38-abi3-manylinux_2_28_ppc64le.whl", hash = "sha256:b12c6b1e1651e42ab5de8b1e00dc3b6354fdfd778e7fa60541ddacc27cd21410"},
|
||||
{file = "cryptography-46.0.6-cp38-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:063b67749f338ca9c5a0b7fe438a52c25f9526b851e24e6c9310e7195aad3b4d"},
|
||||
{file = "cryptography-46.0.6-cp38-abi3-manylinux_2_31_armv7l.whl", hash = "sha256:02fad249cb0e090b574e30b276a3da6a149e04ee2f049725b1f69e7b8351ec70"},
|
||||
{file = "cryptography-46.0.6-cp38-abi3-manylinux_2_34_aarch64.whl", hash = "sha256:7e6142674f2a9291463e5e150090b95a8519b2fb6e6aaec8917dd8d094ce750d"},
|
||||
{file = "cryptography-46.0.6-cp38-abi3-manylinux_2_34_ppc64le.whl", hash = "sha256:456b3215172aeefb9284550b162801d62f5f264a081049a3e94307fe20792cfa"},
|
||||
{file = "cryptography-46.0.6-cp38-abi3-manylinux_2_34_x86_64.whl", hash = "sha256:341359d6c9e68834e204ceaf25936dffeafea3829ab80e9503860dcc4f4dac58"},
|
||||
{file = "cryptography-46.0.6-cp38-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:9a9c42a2723999a710445bc0d974e345c32adfd8d2fac6d8a251fa829ad31cfb"},
|
||||
{file = "cryptography-46.0.6-cp38-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:6617f67b1606dfd9fe4dbfa354a9508d4a6d37afe30306fe6c101b7ce3274b72"},
|
||||
{file = "cryptography-46.0.6-cp38-abi3-win32.whl", hash = "sha256:7f6690b6c55e9c5332c0b59b9c8a3fb232ebf059094c17f9019a51e9827df91c"},
|
||||
{file = "cryptography-46.0.6-cp38-abi3-win_amd64.whl", hash = "sha256:79e865c642cfc5c0b3eb12af83c35c5aeff4fa5c672dc28c43721c2c9fdd2f0f"},
|
||||
{file = "cryptography-46.0.6-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:2ea0f37e9a9cf0df2952893ad145fd9627d326a59daec9b0802480fa3bcd2ead"},
|
||||
{file = "cryptography-46.0.6-pp311-pypy311_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:a3e84d5ec9ba01f8fd03802b2147ba77f0c8f2617b2aff254cedd551844209c8"},
|
||||
{file = "cryptography-46.0.6-pp311-pypy311_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:12f0fa16cc247b13c43d56d7b35287ff1569b5b1f4c5e87e92cc4fcc00cd10c0"},
|
||||
{file = "cryptography-46.0.6-pp311-pypy311_pp73-manylinux_2_34_aarch64.whl", hash = "sha256:50575a76e2951fe7dbd1f56d181f8c5ceeeb075e9ff88e7ad997d2f42af06e7b"},
|
||||
{file = "cryptography-46.0.6-pp311-pypy311_pp73-manylinux_2_34_x86_64.whl", hash = "sha256:90e5f0a7b3be5f40c3a0a0eafb32c681d8d2c181fc2a1bdabe9b3f611d9f6b1a"},
|
||||
{file = "cryptography-46.0.6-pp311-pypy311_pp73-win_amd64.whl", hash = "sha256:6728c49e3b2c180ef26f8e9f0a883a2c585638db64cf265b49c9ba10652d430e"},
|
||||
{file = "cryptography-46.0.6.tar.gz", hash = "sha256:27550628a518c5c6c903d84f637fbecf287f6cb9ced3804838a1295dc1fd0759"},
|
||||
]
|
||||
|
||||
[package.dependencies]
|
||||
@@ -470,7 +470,7 @@ nox = ["nox[uv] (>=2024.4.15)"]
|
||||
pep8test = ["check-sdist", "click (>=8.0.1)", "mypy (>=1.14)", "ruff (>=0.11.11)"]
|
||||
sdist = ["build (>=1.0.0)"]
|
||||
ssh = ["bcrypt (>=3.1.5)"]
|
||||
test = ["certifi (>=2024)", "cryptography-vectors (==46.0.5)", "pretend (>=0.7)", "pytest (>=7.4.0)", "pytest-benchmark (>=4.0)", "pytest-cov (>=2.10.1)", "pytest-xdist (>=3.5.0)"]
|
||||
test = ["certifi (>=2024)", "cryptography-vectors (==46.0.6)", "pretend (>=0.7)", "pytest (>=7.4.0)", "pytest-benchmark (>=4.0)", "pytest-cov (>=2.10.1)", "pytest-xdist (>=3.5.0)"]
|
||||
test-randomorder = ["pytest-randomly"]
|
||||
|
||||
[[package]]
|
||||
@@ -1955,4 +1955,4 @@ uvloop = ["uvloop"]
|
||||
[metadata]
|
||||
lock-version = "2.1"
|
||||
python-versions = "^3.13"
|
||||
content-hash = "55b974f6c38b7bc0908cf88c1ab4972ffd9f97b398c87d0211c01d95dd0cbe4a"
|
||||
content-hash = "3ce9328b4097f910e55c591307b9e85f9a70ae4f4b21a03d2cab74620e38512a"
|
||||
|
||||
@@ -1,6 +1,6 @@
[tool.poetry]
name = "aitbc-blockchain-node"
version = "v0.2.2"
version = "v0.2.3"
description = "AITBC blockchain node service"
authors = ["AITBC Team"]
packages = [
@@ -32,8 +32,8 @@ class RateLimitMiddleware(BaseHTTPMiddleware):

    async def dispatch(self, request: Request, call_next):
        client_ip = request.client.host if request.client else "unknown"
        # Bypass rate limiting for localhost (sync/health internal traffic)
        if client_ip in {"127.0.0.1", "::1"}:
        # Bypass rate limiting for localhost and internal network (sync/health internal traffic)
        if client_ip in {"127.0.0.1", "::1", "10.1.223.93", "10.1.223.40"}:
            return await call_next(request)
        now = time.time()
        # Clean old entries
@@ -12,6 +12,15 @@ from typing import Dict, Any, Optional, List

logger = logging.getLogger(__name__)

# Import settings for configuration
try:
    from .config import settings
except ImportError:
    # Fallback if settings not available
    class Settings:
        blockchain_monitoring_interval_seconds = 10
    settings = Settings()

class ChainSyncService:
    def __init__(self, redis_url: str, node_id: str, rpc_port: int = 8006, leader_host: str = None,
                 source_host: str = "127.0.0.1", source_port: int = None,
@@ -70,7 +79,7 @@ class ChainSyncService:
        last_broadcast_height = 0
        retry_count = 0
        max_retries = 5
        base_delay = 2
        base_delay = settings.blockchain_monitoring_interval_seconds  # Use config setting instead of hardcoded value

        while not self._stop_event.is_set():
            try:
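The change above replaces a hardcoded `base_delay = 2` with the configured monitoring interval, alongside `max_retries = 5`. Assuming the retry loop applies standard exponential backoff from that base (the names and helper below are illustrative, not the project's actual code), the resulting delay schedule looks like this:

```python
import random

def backoff_delays(base_delay: float, max_retries: int, cap: float = 300.0, jitter: float = 0.0):
    """Yield exponential backoff delays: base_delay * 2**attempt, capped at `cap`."""
    for attempt in range(max_retries):
        delay = min(base_delay * (2 ** attempt), cap)
        if jitter:
            # Optional random jitter to avoid synchronized retries across nodes
            delay += random.uniform(0, jitter)
        yield delay
```

With the configured interval as the base, raising `blockchain_monitoring_interval_seconds` automatically stretches every retry step rather than only the first one.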
@@ -42,6 +42,9 @@ class ChainSettings(BaseSettings):
    # Block production limits
    max_block_size_bytes: int = 1_000_000  # 1 MB
    max_txs_per_block: int = 500

    # Monitoring interval (in seconds)
    blockchain_monitoring_interval_seconds: int = 60
    min_fee: int = 0  # Minimum fee to accept into mempool

    # Mempool settings
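Because `ChainSettings` extends pydantic's `BaseSettings`, each of these fields can be overridden per environment via environment variables. A dependency-free sketch of that default-plus-env-override behavior (class and variable names here are assumed for illustration, not taken from the project):

```python
import os
from dataclasses import dataclass, field

def _env_int(name: str, default: int) -> int:
    """Read an integer from the environment, falling back to a default."""
    raw = os.environ.get(name)
    return int(raw) if raw is not None else default

@dataclass
class ChainSettingsSketch:
    max_block_size_bytes: int = field(
        default_factory=lambda: _env_int("MAX_BLOCK_SIZE_BYTES", 1_000_000))
    max_txs_per_block: int = field(
        default_factory=lambda: _env_int("MAX_TXS_PER_BLOCK", 500))
    blockchain_monitoring_interval_seconds: int = field(
        default_factory=lambda: _env_int("BLOCKCHAIN_MONITORING_INTERVAL_SECONDS", 60))
    min_fee: int = field(
        default_factory=lambda: _env_int("MIN_FEE", 0))
```

This mirrors why the hardcoded `base_delay` in the sync service was worth replacing: once the value lives in settings, an operator can tune it without a code change.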
@@ -23,6 +23,10 @@ _logger = get_logger(__name__)

router = APIRouter()

# Global rate limiter for importBlock
_last_import_time = 0
_import_lock = asyncio.Lock()

# Global variable to store the PoA proposer
_poa_proposer = None

@@ -192,8 +196,8 @@ async def get_mempool(chain_id: str = None, limit: int = 100) -> Dict[str, Any]:
            "count": len(pending_txs)
        }
    except Exception as e:
        _logger.error("Failed to get mempool", extra={"error": str(e)})
        raise HTTPException(status_code=500, detail=f"Failed to get mempool: {str(e)}")
        _logger.error(f"Failed to get mempool", extra={"error": str(e)})
        raise HTTPException(status_code=status.HTTP_500_INTERNAL_SERVER_ERROR, detail=f"Failed to get mempool: {str(e)}")


@router.get("/accounts/{address}", summary="Get account information")
@@ -321,3 +325,80 @@ async def moderate_message(message_id: str, moderation_data: dict) -> Dict[str,
        moderation_data.get("action"),
        moderation_data.get("reason", "")
    )

@router.post("/importBlock", summary="Import a block")
async def import_block(block_data: dict) -> Dict[str, Any]:
    """Import a block into the blockchain"""
    global _last_import_time

    async with _import_lock:
        try:
            # Rate limiting: max 1 import per second
            current_time = time.time()
            time_since_last = current_time - _last_import_time
            if time_since_last < 1.0:  # 1 second minimum between imports
                await asyncio.sleep(1.0 - time_since_last)

            _last_import_time = time.time()

            with session_scope() as session:
                # Convert timestamp string to datetime if needed
                timestamp = block_data.get("timestamp")
                if isinstance(timestamp, str):
                    try:
                        timestamp = datetime.fromisoformat(timestamp.replace('Z', '+00:00'))
                    except ValueError:
                        # Fallback to current time if parsing fails
                        timestamp = datetime.utcnow()
                elif timestamp is None:
                    timestamp = datetime.utcnow()

                # Extract height from either 'number' or 'height' field
                height = block_data.get("number") or block_data.get("height")
                if height is None:
                    raise ValueError("Block height is required")

                # Check if block already exists to prevent duplicates
                existing = session.execute(
                    select(Block).where(Block.height == int(height))
                ).scalar_one_or_none()
                if existing:
                    return {
                        "success": True,
                        "block_number": existing.height,
                        "block_hash": existing.hash,
                        "message": "Block already exists"
                    }

                # Create block from data
                block = Block(
                    chain_id=block_data.get("chainId", "ait-mainnet"),
                    height=int(height),
                    hash=block_data.get("hash"),
                    parent_hash=block_data.get("parentHash", ""),
                    proposer=block_data.get("miner", ""),
                    timestamp=timestamp,
                    tx_count=len(block_data.get("transactions", [])),
                    state_root=block_data.get("stateRoot"),
                    block_metadata=json.dumps(block_data)
                )

                session.add(block)
                session.commit()

                _logger.info(f"Successfully imported block {block.height}")
                metrics_registry.increment("blocks_imported_total")

                return {
                    "success": True,
                    "block_number": block.height,
                    "block_hash": block.hash
                }

        except Exception as e:
            _logger.error(f"Failed to import block: {e}")
            metrics_registry.increment("block_import_errors_total")
            raise HTTPException(
                status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
                detail=f"Failed to import block: {str(e)}"
            )
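The `importBlock` endpoint above serializes imports behind `_import_lock` and enforces at most one import per second by sleeping off the remainder of the interval. That throttle pattern can be isolated into a small reusable class (illustrative names; the short interval keeps the demo fast, whereas the endpoint uses 1.0 seconds):

```python
import asyncio
import time

class MinIntervalThrottle:
    """Serialize calls and enforce a minimum interval between them."""

    def __init__(self, interval: float):
        self.interval = interval
        self._last = 0.0
        self._lock = asyncio.Lock()

    async def wait(self) -> None:
        async with self._lock:
            elapsed = time.monotonic() - self._last
            if elapsed < self.interval:
                # Sleep off the remainder of the interval, as importBlock does
                await asyncio.sleep(self.interval - elapsed)
            self._last = time.monotonic()

async def demo() -> float:
    throttle = MinIntervalThrottle(0.05)
    start = time.monotonic()
    # Three concurrent callers are serialized and spaced ~0.05s apart
    await asyncio.gather(*(throttle.wait() for _ in range(3)))
    return time.monotonic() - start
```

Using `time.monotonic()` rather than `time.time()` makes the spacing immune to wall-clock adjustments; the endpoint's module-level `_last_import_time` plus lock achieves the same serialization with process-wide scope.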
@@ -1,6 +1,6 @@
[tool.poetry]
name = "aitbc-coordinator-api"
version = "0.1.0"
version = "v0.2.3"
description = "AITBC Coordinator API service"
authors = ["AITBC Team"]
packages = [
@@ -3,7 +3,7 @@ import sys
import os

# Security: Lock sys.path to trusted locations to prevent malicious package shadowing
# Keep: site-packages under /opt/aitbc (venv), stdlib paths, and our app directory
# Keep: site-packages under /opt/aitbc (venv), stdlib paths, our app directory, and crypto/sdk paths
_LOCKED_PATH = []
for p in sys.path:
    if 'site-packages' in p and '/opt/aitbc' in p:
@@ -12,7 +12,14 @@ for p in sys.path:
        _LOCKED_PATH.append(p)
    elif p.startswith('/opt/aitbc/apps/coordinator-api'):  # our app code
        _LOCKED_PATH.append(p)
sys.path = _LOCKED_PATH
    elif p.startswith('/opt/aitbc/packages/py/aitbc-crypto'):  # crypto module
        _LOCKED_PATH.append(p)
    elif p.startswith('/opt/aitbc/packages/py/aitbc-sdk'):  # sdk module
        _LOCKED_PATH.append(p)

# Add crypto and sdk paths to sys.path
sys.path.insert(0, '/opt/aitbc/packages/py/aitbc-crypto/src')
sys.path.insert(0, '/opt/aitbc/packages/py/aitbc-sdk/src')

from sqlalchemy.orm import Session
from typing import Annotated
@@ -241,21 +248,21 @@ def create_app() -> FastAPI:
        ]
    )

    # API Key middleware (if configured)
    required_key = os.getenv("COORDINATOR_API_KEY")
    if required_key:
        @app.middleware("http")
        async def api_key_middleware(request: Request, call_next):
            # Health endpoints are exempt
            if request.url.path in ("/health", "/v1/health", "/health/live", "/health/ready", "/metrics", "/rate-limit-metrics"):
                return await call_next(request)
            provided = request.headers.get("X-Api-Key")
            if provided != required_key:
                return JSONResponse(
                    status_code=401,
                    content={"detail": "Invalid or missing API key"}
                )
            return await call_next(request)
    # API Key middleware (if configured) - DISABLED in favor of dependency injection
    # required_key = os.getenv("COORDINATOR_API_KEY")
    # if required_key:
    #     @app.middleware("http")
    #     async def api_key_middleware(request: Request, call_next):
    #         # Health endpoints are exempt
    #         if request.url.path in ("/health", "/v1/health", "/health/live", "/health/ready", "/metrics", "/rate-limit-metrics"):
    #             return await call_next(request)
    #         provided = request.headers.get("X-Api-Key")
    #         if provided != required_key:
    #             return JSONResponse(
    #                 status_code=401,
    #                 content={"detail": "Invalid or missing API key"}
    #             )
    #         return await call_next(request)

    app.state.limiter = limiter
    app.add_exception_handler(RateLimitExceeded, _rate_limit_exceeded_handler)
@@ -281,7 +288,6 @@ def create_app() -> FastAPI:
    app.include_router(services, prefix="/v1")
    app.include_router(users, prefix="/v1")
    app.include_router(exchange, prefix="/v1")
    app.include_router(marketplace_offers, prefix="/v1")
    app.include_router(payments, prefix="/v1")
    app.include_router(web_vitals, prefix="/v1")
    app.include_router(edge_gpu)
@@ -302,10 +308,15 @@ def create_app() -> FastAPI:
    app.include_router(developer_platform, prefix="/v1")
    app.include_router(governance_enhanced, prefix="/v1")

    # Include marketplace_offers AFTER global_marketplace to override the /offers endpoint
    app.include_router(marketplace_offers, prefix="/v1")

    # Add blockchain router for CLI compatibility
    print(f"Adding blockchain router: {blockchain}")
    app.include_router(blockchain, prefix="/v1")
    print("Blockchain router added successfully")
    # print(f"Adding blockchain router: {blockchain}")
    # app.include_router(blockchain, prefix="/v1")
    # BLOCKCHAIN ROUTER DISABLED - preventing monitoring calls
    # Blockchain router disabled - preventing monitoring calls
    print("Blockchain router disabled")

    # Add Prometheus metrics endpoint
    metrics_app = make_asgi_app()
@@ -1,6 +1,38 @@
from fastapi import FastAPI
"""Coordinator API main entry point."""
import sys
import os

# Security: Lock sys.path to trusted locations to prevent malicious package shadowing
# Keep: site-packages under /opt/aitbc (venv), stdlib paths, our app directory, and crypto/sdk paths
_LOCKED_PATH = []
for p in sys.path:
    if 'site-packages' in p and '/opt/aitbc' in p:
        _LOCKED_PATH.append(p)
    elif 'site-packages' not in p and ('/usr/lib/python' in p or '/usr/local/lib/python' in p):
        _LOCKED_PATH.append(p)
    elif p.startswith('/opt/aitbc/apps/coordinator-api'):  # our app code
        _LOCKED_PATH.append(p)
    elif p.startswith('/opt/aitbc/packages/py/aitbc-crypto'):  # crypto module
        _LOCKED_PATH.append(p)
    elif p.startswith('/opt/aitbc/packages/py/aitbc-sdk'):  # sdk module
        _LOCKED_PATH.append(p)

# Add crypto and sdk paths to sys.path
sys.path.insert(0, '/opt/aitbc/packages/py/aitbc-crypto/src')
sys.path.insert(0, '/opt/aitbc/packages/py/aitbc-sdk/src')

from sqlalchemy.orm import Session
from typing import Annotated
from slowapi import Limiter, _rate_limit_exceeded_handler
from slowapi.util import get_remote_address
from slowapi.errors import RateLimitExceeded
from fastapi import FastAPI, Request, Depends
from fastapi.middleware.cors import CORSMiddleware
from prometheus_client import make_asgi_app
from fastapi.responses import JSONResponse, Response
from fastapi.exceptions import RequestValidationError
from prometheus_client import Counter, Histogram, generate_latest, make_asgi_app
from prometheus_client.core import CollectorRegistry
from prometheus_client.exposition import CONTENT_TYPE_LATEST

from .config import settings
from .storage import init_db
@@ -17,21 +49,226 @@ from .routers import (
    zk_applications,
    explorer,
    payments,
    web_vitals,
    edge_gpu,
    cache_management,
    agent_identity,
    agent_router,
    global_marketplace,
    cross_chain_integration,
    global_marketplace_integration,
    developer_platform,
    governance_enhanced,
    blockchain
)
from .routers.governance import router as governance
# Skip optional routers with missing dependencies
try:
    from .routers.ml_zk_proofs import router as ml_zk_proofs
except ImportError:
    ml_zk_proofs = None
    print("WARNING: ML ZK proofs router not available (missing tenseal)")
from .routers.community import router as community_router
from .routers.governance import router as new_governance_router
from .routers.partners import router as partners
from .storage.models_governance import GovernanceProposal, ProposalVote, TreasuryTransaction, GovernanceParameter
from .routers.marketplace_enhanced_simple import router as marketplace_enhanced
from .routers.openclaw_enhanced_simple import router as openclaw_enhanced
from .routers.monitoring_dashboard import router as monitoring_dashboard
# Skip optional routers with missing dependencies
try:
    from .routers.multi_modal_rl import router as multi_modal_rl_router
except ImportError:
    multi_modal_rl_router = None
    print("WARNING: Multi-modal RL router not available (missing torch)")

try:
    from .routers.ml_zk_proofs import router as ml_zk_proofs
except ImportError:
    ml_zk_proofs = None
    print("WARNING: ML ZK proofs router not available (missing dependencies)")
from .storage.models_governance import GovernanceProposal, ProposalVote, TreasuryTransaction, GovernanceParameter
from .exceptions import AITBCError, ErrorResponse
import logging
logger = logging.getLogger(__name__)
from .config import settings
from .storage.db import init_db


from contextlib import asynccontextmanager

@asynccontextmanager
async def lifespan(app: FastAPI):
    """Lifecycle events for the Coordinator API."""
    logger.info("Starting Coordinator API")

    try:
        # Initialize database
        init_db()
        logger.info("Database initialized successfully")

        # Warmup database connections
        logger.info("Warming up database connections...")
        try:
            # Test database connectivity
            from sqlmodel import select
            from .domain import Job
            from .storage import get_session

            # Simple connectivity test using dependency injection
            session_gen = get_session()
            session = next(session_gen)
            try:
                test_query = select(Job).limit(1)
                session.execute(test_query).first()
            finally:
                session.close()
            logger.info("Database warmup completed successfully")
        except Exception as e:
            logger.warning(f"Database warmup failed: {e}")
            # Continue startup even if warmup fails

        # Validate configuration
        if settings.app_env == "production":
            logger.info("Production environment detected, validating configuration")
            # Configuration validation happens automatically via Pydantic validators
            logger.info("Configuration validation passed")

        # Initialize audit logging directory
        from pathlib import Path
        audit_dir = Path(settings.audit_log_dir)
        audit_dir.mkdir(parents=True, exist_ok=True)
        logger.info(f"Audit logging directory: {audit_dir}")

        # Initialize rate limiting configuration
        logger.info("Rate limiting configuration:")
        logger.info(f"  Jobs submit: {settings.rate_limit_jobs_submit}")
        logger.info(f"  Miner register: {settings.rate_limit_miner_register}")
        logger.info(f"  Miner heartbeat: {settings.rate_limit_miner_heartbeat}")
        logger.info(f"  Admin stats: {settings.rate_limit_admin_stats}")

        # Log service startup details
        logger.info(f"Coordinator API started on {settings.app_host}:{settings.app_port}")
        logger.info(f"Database adapter: {settings.database.adapter}")
        logger.info(f"Environment: {settings.app_env}")

        # Log complete configuration summary
        logger.info("=== Coordinator API Configuration Summary ===")
        logger.info(f"Environment: {settings.app_env}")
        logger.info(f"Database: {settings.database.adapter}")
        logger.info(f"Rate Limits:")
        logger.info(f"  Jobs submit: {settings.rate_limit_jobs_submit}")
        logger.info(f"  Miner register: {settings.rate_limit_miner_register}")
        logger.info(f"  Miner heartbeat: {settings.rate_limit_miner_heartbeat}")
        logger.info(f"  Admin stats: {settings.rate_limit_admin_stats}")
        logger.info(f"  Marketplace list: {settings.rate_limit_marketplace_list}")
        logger.info(f"  Marketplace stats: {settings.rate_limit_marketplace_stats}")
        logger.info(f"  Marketplace bid: {settings.rate_limit_marketplace_bid}")
        logger.info(f"  Exchange payment: {settings.rate_limit_exchange_payment}")
        logger.info(f"Audit logging: {settings.audit_log_dir}")
        logger.info("=== Startup Complete ===")

        # Initialize health check endpoints
        logger.info("Health check endpoints initialized")

        # Ready to serve requests
        logger.info("🚀 Coordinator API is ready to serve requests")

    except Exception as e:
        logger.error(f"Failed to start Coordinator API: {e}")
        raise

    yield

    logger.info("Shutting down Coordinator API")
    try:
        # Graceful shutdown sequence
        logger.info("Initiating graceful shutdown sequence...")

        # Stop accepting new requests
        logger.info("Stopping new request processing")

        # Wait for in-flight requests to complete (brief period)
        import asyncio
        logger.info("Waiting for in-flight requests to complete...")
        await asyncio.sleep(1)  # Brief grace period

        # Cleanup database connections
        logger.info("Closing database connections...")
        try:
            # Close any open database sessions/pools
            logger.info("Database connections closed successfully")
        except Exception as e:
            logger.warning(f"Error closing database connections: {e}")

        # Cleanup rate limiting state
        logger.info("Cleaning up rate limiting state...")

        # Cleanup audit resources
        logger.info("Cleaning up audit resources...")

        # Log shutdown metrics
        logger.info("=== Coordinator API Shutdown Summary ===")
        logger.info("All resources cleaned up successfully")
        logger.info("Graceful shutdown completed")
        logger.info("=== Shutdown Complete ===")

    except Exception as e:
        logger.error(f"Error during shutdown: {e}")
        # Continue shutdown even if cleanup fails

def create_app() -> FastAPI:
    # Initialize rate limiter
    limiter = Limiter(key_func=get_remote_address)

    app = FastAPI(
        title="AITBC Coordinator API",
        version="0.1.0",
        description="Stage 1 coordinator service handling job orchestration between clients and miners.",
        description="API for coordinating AI training jobs and blockchain operations",
        version="1.0.0",
        docs_url="/docs",
        redoc_url="/redoc",
        lifespan=lifespan,
        openapi_components={
            "securitySchemes": {
                "ApiKeyAuth": {
                    "type": "apiKey",
                    "in": "header",
                    "name": "X-Api-Key"
                }
            }
        },
        openapi_tags=[
            {"name": "health", "description": "Health check endpoints"},
            {"name": "client", "description": "Client operations"},
            {"name": "miner", "description": "Miner operations"},
            {"name": "admin", "description": "Admin operations"},
            {"name": "marketplace", "description": "GPU Marketplace"},
            {"name": "exchange", "description": "Exchange operations"},
            {"name": "governance", "description": "Governance operations"},
            {"name": "zk", "description": "Zero-Knowledge proofs"},
        ]
    )

    # Create database tables
    init_db()
    # API Key middleware (if configured) - DISABLED in favor of dependency injection
    # required_key = os.getenv("COORDINATOR_API_KEY")
    # if required_key:
    #     @app.middleware("http")
    #     async def api_key_middleware(request: Request, call_next):
    #         # Health endpoints are exempt
    #         if request.url.path in ("/health", "/v1/health", "/health/live", "/health/ready", "/metrics", "/rate-limit-metrics"):
    #             return await call_next(request)
    #         provided = request.headers.get("X-Api-Key")
    #         if provided != required_key:
    #             return JSONResponse(
    #                 status_code=401,
    #                 content={"detail": "Invalid or missing API key"}
    #             )
    #         return await call_next(request)

    app.state.limiter = limiter
    app.add_exception_handler(RateLimitExceeded, _rate_limit_exceeded_handler)

    # Create database tables (now handled in lifespan)
    # init_db()

    app.add_middleware(
        CORSMiddleware,
@@ -41,30 +278,238 @@ def create_app() -> FastAPI:
        allow_headers=["*"]  # Allow all headers for API keys and content types
    )

    # Enable all routers with OpenAPI disabled
    app.include_router(client, prefix="/v1")
    app.include_router(miner, prefix="/v1")
    app.include_router(admin, prefix="/v1")
    app.include_router(marketplace, prefix="/v1")
    app.include_router(marketplace_gpu, prefix="/v1")
    app.include_router(exchange, prefix="/v1")
    app.include_router(users, prefix="/v1/users")
    app.include_router(services, prefix="/v1")
    app.include_router(payments, prefix="/v1")
    app.include_router(marketplace_offers, prefix="/v1")
    app.include_router(zk_applications.router, prefix="/v1")
    app.include_router(governance, prefix="/v1")
    app.include_router(partners, prefix="/v1")
    app.include_router(explorer, prefix="/v1")
    app.include_router(services, prefix="/v1")
    app.include_router(users, prefix="/v1")
    app.include_router(exchange, prefix="/v1")
    app.include_router(payments, prefix="/v1")
    app.include_router(web_vitals, prefix="/v1")
    app.include_router(edge_gpu)

    # Add standalone routers for tasks and payments
    app.include_router(marketplace_gpu, prefix="/v1")

    if ml_zk_proofs:
        app.include_router(ml_zk_proofs)
    app.include_router(marketplace_enhanced, prefix="/v1")
    app.include_router(openclaw_enhanced, prefix="/v1")
    app.include_router(monitoring_dashboard, prefix="/v1")
    app.include_router(agent_router.router, prefix="/v1/agents")
    app.include_router(agent_identity, prefix="/v1")
    app.include_router(global_marketplace, prefix="/v1")
    app.include_router(cross_chain_integration, prefix="/v1")
    app.include_router(global_marketplace_integration, prefix="/v1")
    app.include_router(developer_platform, prefix="/v1")
    app.include_router(governance_enhanced, prefix="/v1")

    # Include marketplace_offers AFTER global_marketplace to override the /offers endpoint
    app.include_router(marketplace_offers, prefix="/v1")

    # Add blockchain router for CLI compatibility
    print(f"Adding blockchain router: {blockchain}")
    app.include_router(blockchain, prefix="/v1")
    print("Blockchain router added successfully")

    # Add Prometheus metrics endpoint
    metrics_app = make_asgi_app()
    app.mount("/metrics", metrics_app)

    # Add Prometheus metrics for rate limiting
    rate_limit_registry = CollectorRegistry()
    rate_limit_hits_total = Counter(
        'rate_limit_hits_total',
        'Total number of rate limit violations',
        ['endpoint', 'method', 'limit'],
        registry=rate_limit_registry
    )
    rate_limit_response_time = Histogram(
        'rate_limit_response_time_seconds',
        'Response time for rate limited requests',
        ['endpoint', 'method'],
        registry=rate_limit_registry
    )

    @app.exception_handler(RateLimitExceeded)
    async def rate_limit_handler(request: Request, exc: RateLimitExceeded) -> JSONResponse:
        """Handle rate limit exceeded errors with proper 429 status."""
        request_id = request.headers.get("X-Request-ID")

        # Record rate limit hit metrics
        endpoint = request.url.path
        method = request.method
        limit_detail = str(exc.detail) if hasattr(exc, 'detail') else 'unknown'

        rate_limit_hits_total.labels(
            endpoint=endpoint,
            method=method,
            limit=limit_detail
        ).inc()

        logger.warning(f"Rate limit exceeded: {exc}", extra={
            "request_id": request_id,
            "path": request.url.path,
            "method": request.method,
            "rate_limit_detail": limit_detail
        })

        error_response = ErrorResponse(
            error={
                "code": "RATE_LIMIT_EXCEEDED",
                "message": "Too many requests. Please try again later.",
                "status": 429,
                "details": [{
                    "field": "rate_limit",
                    "message": str(exc.detail),
                    "code": "too_many_requests",
                    "retry_after": 60  # Default retry after 60 seconds
                }]
            },
            request_id=request_id
        )
        return JSONResponse(
            status_code=429,
            content=error_response.model_dump(),
            headers={"Retry-After": "60"}
        )

    @app.get("/rate-limit-metrics")
    async def rate_limit_metrics():
        """Rate limiting metrics endpoint."""
        return Response(
            content=generate_latest(rate_limit_registry),
            media_type=CONTENT_TYPE_LATEST
        )

    @app.exception_handler(Exception)
    async def general_exception_handler(request: Request, exc: Exception) -> JSONResponse:
        """Handle all unhandled exceptions with structured error responses."""
        request_id = request.headers.get("X-Request-ID")
        logger.error(f"Unhandled exception: {exc}", extra={
            "request_id": request_id,
            "path": request.url.path,
            "method": request.method,
            "error_type": type(exc).__name__
        })

        error_response = ErrorResponse(
            error={
                "code": "INTERNAL_SERVER_ERROR",
                "message": "An unexpected error occurred",
                "status": 500,
                "details": [{
                    "field": "internal",
                    "message": str(exc),
                    "code": type(exc).__name__
                }]
            },
            request_id=request_id
        )
        return JSONResponse(
            status_code=500,
            content=error_response.model_dump()
        )

    @app.exception_handler(AITBCError)
    async def aitbc_error_handler(request: Request, exc: AITBCError) -> JSONResponse:
        """Handle AITBC exceptions with structured error responses."""
        request_id = request.headers.get("X-Request-ID")
        response = exc.to_response(request_id)
        return JSONResponse(
            status_code=response.error["status"],
            content=response.model_dump()
        )

    @app.exception_handler(RequestValidationError)
    async def validation_error_handler(request: Request, exc: RequestValidationError) -> JSONResponse:
        """Handle FastAPI validation errors with structured error responses."""
        request_id = request.headers.get("X-Request-ID")
        logger.warning(f"Validation error: {exc}", extra={
            "request_id": request_id,
            "path": request.url.path,
||||
"method": request.method,
|
||||
"validation_errors": exc.errors()
|
||||
})
|
||||
|
||||
details = []
|
||||
for error in exc.errors():
|
||||
details.append({
|
||||
"field": ".".join(str(loc) for loc in error["loc"]),
|
||||
"message": error["msg"],
|
||||
"code": error["type"]
|
||||
})
|
||||
|
||||
error_response = ErrorResponse(
|
||||
error={
|
||||
"code": "VALIDATION_ERROR",
|
||||
"message": "Request validation failed",
|
||||
"status": 422,
|
||||
"details": details
|
||||
},
|
||||
request_id=request_id
|
||||
)
|
||||
return JSONResponse(
|
||||
status_code=422,
|
||||
content=error_response.model_dump()
|
||||
)
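The validation handler flattens Pydantic's nested `loc` tuples into dotted field paths. That mapping can be sketched in isolation (the error-dict shape matches what FastAPI's `exc.errors()` returns):

```python
def build_validation_details(errors):
    """Flatten FastAPI-style validation errors into the API's detail schema."""
    return [
        {
            "field": ".".join(str(loc) for loc in err["loc"]),
            "message": err["msg"],
            "code": err["type"],
        }
        for err in errors
    ]

sample = [{"loc": ("body", "price"), "msg": "field required", "type": "missing"}]
print(build_validation_details(sample))
```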

@app.get("/health", tags=["health"], summary="Root health endpoint for CLI compatibility")
async def root_health() -> dict[str, str]:
    import sys
    return {
        "status": "ok",
        "env": settings.app_env,
        "python_version": f"{sys.version_info.major}.{sys.version_info.minor}.{sys.version_info.micro}"
    }

@app.get("/v1/health", tags=["health"], summary="Service healthcheck")
async def health() -> dict[str, str]:
    return {"status": "ok", "env": settings.app_env}
    import sys
    return {
        "status": "ok",
        "env": settings.app_env,
        "python_version": f"{sys.version_info.major}.{sys.version_info.minor}.{sys.version_info.micro}"
    }

@app.get("/health/live", tags=["health"], summary="Liveness probe")
async def liveness() -> dict[str, str]:
    import sys
    return {
        "status": "alive",
        "python_version": f"{sys.version_info.major}.{sys.version_info.minor}.{sys.version_info.micro}"
    }

@app.get("/health/ready", tags=["health"], summary="Readiness probe")
async def readiness() -> dict[str, str]:
    # Check database connectivity
    try:
        from .storage import get_engine
        engine = get_engine()
        with engine.connect() as conn:
            conn.execute("SELECT 1")
        import sys
        return {
            "status": "ready",
            "database": "connected",
            "python_version": f"{sys.version_info.major}.{sys.version_info.minor}.{sys.version_info.micro}"
        }
    except Exception as e:
        logger.error("Readiness check failed", extra={"error": str(e)})
        return JSONResponse(
            status_code=503,
            content={"status": "not ready", "error": str(e)}
        )
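The readiness probe maps a database round-trip to either a 200 or a 503 body. The control flow can be sketched without FastAPI or a real database (the probe callable stands in for the `SELECT 1` check; names here are illustrative, not from the codebase):

```python
def readiness_payload(check_db):
    """Run a connectivity probe and map the outcome to (status_code, body)."""
    try:
        check_db()
        return 200, {"status": "ready", "database": "connected"}
    except Exception as exc:
        return 503, {"status": "not ready", "error": str(exc)}

def failing_probe():
    raise RuntimeError("db down")

print(readiness_payload(lambda: None)[0])
print(readiness_payload(failing_probe)[0])
```

Returning 503 (rather than raising) lets an orchestrator's readiness check distinguish "up but not serviceable" from a crashed process.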

    return app


app = create_app()

# Register jobs router (disabled - legacy)
# from .routers import jobs as jobs_router
# app.include_router(jobs_router.router)
@@ -3,7 +3,7 @@ Models package for the AITBC Coordinator API
"""

# Import basic types from types.py to avoid circular imports
from ..types import (
from ..custom_types import (
    JobState,
    Constraints,
)

@@ -16,7 +16,7 @@ from ..storage import get_session
from ..services.adaptive_learning import AdaptiveLearningService

logger = logging.getLogger(__name__)
from ..logging import get_logger
from ..app_logging import get_logger


router = APIRouter()
@@ -25,7 +25,7 @@ router = APIRouter()
@router.get("/health", tags=["health"], summary="Adaptive Learning Service Health")
async def adaptive_learning_health(session: Annotated[Session, Depends(get_session)]) -> Dict[str, Any]:
    """
    Health check for Adaptive Learning Service (Port 8005)
    Health check for Adaptive Learning Service (Port 8011)
    """
    try:
        # Initialize service
@@ -39,7 +39,7 @@ async def adaptive_learning_health(session: Annotated[Session, Depends(get_sessi
        service_status = {
            "status": "healthy",
            "service": "adaptive-learning",
            "port": 8005,
            "port": 8011,
            "timestamp": datetime.utcnow().isoformat(),
            "python_version": f"{sys.version_info.major}.{sys.version_info.minor}.{sys.version_info.micro}",

@@ -101,7 +101,7 @@ async def adaptive_learning_health(session: Annotated[Session, Depends(get_sessi
        return {
            "status": "unhealthy",
            "service": "adaptive-learning",
            "port": 8005,
            "port": 8011,
            "timestamp": datetime.utcnow().isoformat(),
            "error": str(e)
        }
@@ -176,7 +176,7 @@ async def adaptive_learning_deep_health(session: Annotated[Session, Depends(get_
        return {
            "status": "healthy",
            "service": "adaptive-learning",
            "port": 8005,
            "port": 8011,
            "timestamp": datetime.utcnow().isoformat(),
            "algorithm_tests": algorithm_tests,
            "safety_tests": safety_tests,
@@ -188,7 +188,7 @@ async def adaptive_learning_deep_health(session: Annotated[Session, Depends(get_
        return {
            "status": "unhealthy",
            "service": "adaptive-learning",
            "port": 8005,
            "port": 8011,
            "timestamp": datetime.utcnow().isoformat(),
            "error": str(e)
        }

@@ -29,6 +29,68 @@ async def debug_settings() -> dict: # type: ignore[arg-type]
    }


@router.post("/debug/create-test-miner", summary="Create a test miner for debugging")
async def create_test_miner(
    session: Annotated[Session, Depends(get_session)],
    admin_key: str = Depends(require_admin_key())
) -> dict[str, str]: # type: ignore[arg-type]
    """Create a test miner for debugging marketplace sync"""
    try:
        from ..domain import Miner
        from uuid import uuid4

        miner_id = "debug-test-miner"
        session_token = uuid4().hex

        # Check if miner already exists
        existing_miner = session.get(Miner, miner_id)
        if existing_miner:
            # Update existing miner to ONLINE
            existing_miner.status = "ONLINE"
            existing_miner.last_heartbeat = datetime.utcnow()
            existing_miner.session_token = session_token
            session.add(existing_miner)
            session.commit()
            return {"status": "updated", "miner_id": miner_id, "message": "Existing miner updated to ONLINE"}

        # Create new test miner
        miner = Miner(
            id=miner_id,
            capabilities={
                "gpu_memory": 8192,
                "models": ["qwen3:8b"],
                "pricing_per_hour": 0.50,
                "gpu": "RTX 4090",
                "gpu_memory_gb": 8192,
                "gpu_count": 1,
                "cuda_version": "12.0",
                "supported_models": ["qwen3:8b"]
            },
            concurrency=1,
            region="test-region",
            session_token=session_token,
            status="ONLINE",
            inflight=0,
            last_heartbeat=datetime.utcnow()
        )

        session.add(miner)
        session.commit()
        session.refresh(miner)

        logger.info(f"Created test miner: {miner_id}")
        return {
            "status": "created",
            "miner_id": miner_id,
            "session_token": session_token,
            "message": "Test miner created successfully"
        }

    except Exception as e:
        logger.error(f"Failed to create test miner: {e}")
        raise HTTPException(status_code=500, detail=str(e))
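The create-test-miner endpoint above is a create-or-update (upsert) flow: refresh the session token and status if the record exists, otherwise insert a new one. A dict-backed sketch of that branch structure (hypothetical helper, no database involved):

```python
def upsert_test_miner(store, miner_id, session_token):
    """Create the miner if missing, otherwise refresh its session and status."""
    if miner_id in store:
        store[miner_id].update(status="ONLINE", session_token=session_token)
        return "updated"
    store[miner_id] = {"status": "ONLINE", "session_token": session_token}
    return "created"

store = {}
print(upsert_test_miner(store, "debug-test-miner", "tok-1"))  # → created
print(upsert_test_miner(store, "debug-test-miner", "tok-2"))  # → updated
```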


@router.get("/test-key", summary="Test API key validation")
async def test_key(
    api_key: str = Header(default=None, alias="X-Api-Key")
@@ -102,23 +164,26 @@ async def list_jobs(session: Annotated[Session, Depends(get_session)], admin_key

@router.get("/miners", summary="List miners")
async def list_miners(session: Annotated[Session, Depends(get_session)], admin_key: str = Depends(require_admin_key())) -> dict[str, list[dict]]: # type: ignore[arg-type]
    miner_service = MinerService(session)
    miners = [
    from sqlmodel import select
    from ..domain import Miner

    miners = session.execute(select(Miner)).scalars().all()
    miner_list = [
        {
            "miner_id": record.id,
            "status": record.status,
            "inflight": record.inflight,
            "concurrency": record.concurrency,
            "region": record.region,
            "last_heartbeat": record.last_heartbeat.isoformat(),
            "average_job_duration_ms": record.average_job_duration_ms,
            "jobs_completed": record.jobs_completed,
            "jobs_failed": record.jobs_failed,
            "last_receipt_id": record.last_receipt_id,
            "miner_id": miner.id,
            "status": miner.status,
            "inflight": miner.inflight,
            "concurrency": miner.concurrency,
            "region": miner.region,
            "last_heartbeat": miner.last_heartbeat.isoformat(),
            "average_job_duration_ms": miner.average_job_duration_ms,
            "jobs_completed": miner.jobs_completed,
            "jobs_failed": miner.jobs_failed,
            "last_receipt_id": miner.last_receipt_id,
        }
        for record in miner_service.list_records()
        for miner in miners
    ]
    return {"items": miners}
    return {"items": miner_list}


@router.get("/status", summary="Get system status", response_model=None)

@@ -11,7 +11,7 @@ from datetime import datetime, timedelta
from pydantic import BaseModel, Field, validator

from ..storage import get_session
from ..logging import get_logger
from ..app_logging import get_logger
from ..domain.bounty import (
    Bounty, BountySubmission, BountyStatus, BountyTier,
    SubmissionStatus, BountyStats, BountyIntegration

@@ -7,7 +7,7 @@ from datetime import datetime

from ..deps import require_client_key
from ..schemas import JobCreate, JobView, JobResult, JobPaymentCreate
from ..types import JobState
from ..custom_types import JobState
from ..services import JobService
from ..services.payments import PaymentService
from ..config import settings

@@ -25,7 +25,7 @@ from ..services.encryption import EncryptionService, EncryptedData
from ..services.key_management import KeyManager, KeyManagementError
from ..services.access_control import AccessController
from ..auth import get_api_key
from ..logging import get_logger
from ..app_logging import get_logger



@@ -11,7 +11,7 @@ from datetime import datetime, timedelta
from pydantic import BaseModel, Field

from ..storage import get_session
from ..logging import get_logger
from ..app_logging import get_logger
from ..domain.bounty import EcosystemMetrics, BountyStats, AgentMetrics
from ..services.ecosystem_service import EcosystemService
from ..auth import get_current_user

@@ -14,7 +14,7 @@ from typing import Dict, Any

from ..storage import get_session
from ..services.multimodal_agent import MultiModalAgentService
from ..logging import get_logger
from ..app_logging import get_logger


router = APIRouter()
@@ -23,7 +23,7 @@ router = APIRouter()
@router.get("/health", tags=["health"], summary="GPU Multi-Modal Service Health")
async def gpu_multimodal_health(session: Annotated[Session, Depends(get_session)]) -> Dict[str, Any]:
    """
    Health check for GPU Multi-Modal Service (Port 8003)
    Health check for GPU Multi-Modal Service (Port 8010)
    """
    try:
        # Check GPU availability
@@ -37,7 +37,7 @@ async def gpu_multimodal_health(session: Annotated[Session, Depends(get_session)
        service_status = {
            "status": "healthy" if gpu_info["available"] else "degraded",
            "service": "gpu-multimodal",
            "port": 8003,
            "port": 8010,
            "timestamp": datetime.utcnow().isoformat(),
            "python_version": f"{sys.version_info.major}.{sys.version_info.minor}.{sys.version_info.micro}",

@@ -91,7 +91,7 @@ async def gpu_multimodal_health(session: Annotated[Session, Depends(get_session)
        return {
            "status": "unhealthy",
            "service": "gpu-multimodal",
            "port": 8003,
            "port": 8010,
            "timestamp": datetime.utcnow().isoformat(),
            "error": str(e)
        }
@@ -150,7 +150,7 @@ async def gpu_multimodal_deep_health(session: Annotated[Session, Depends(get_ses
        return {
            "status": "healthy" if gpu_info["available"] else "degraded",
            "service": "gpu-multimodal",
            "port": 8003,
            "port": 8010,
            "timestamp": datetime.utcnow().isoformat(),
            "gpu_info": gpu_info,
            "cuda_tests": cuda_tests,
@@ -162,7 +162,7 @@ async def gpu_multimodal_deep_health(session: Annotated[Session, Depends(get_ses
        return {
            "status": "unhealthy",
            "service": "gpu-multimodal",
            "port": 8003,
            "port": 8010,
            "timestamp": datetime.utcnow().isoformat(),
            "error": str(e)
        }
@@ -37,4 +37,4 @@ async def health():

if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8006)
    uvicorn.run(app, host="0.0.0.0", port=8002)

@@ -13,7 +13,9 @@ from typing import Dict, Any

from ..storage import get_session
from ..services.marketplace_enhanced import EnhancedMarketplaceService
from ..logging import get_logger
from ..app_logging import get_logger

logger = get_logger(__name__)


router = APIRouter()
@@ -22,7 +24,7 @@ router = APIRouter()
@router.get("/health", tags=["health"], summary="Enhanced Marketplace Service Health")
async def marketplace_enhanced_health(session: Annotated[Session, Depends(get_session)]) -> Dict[str, Any]:
    """
    Health check for Enhanced Marketplace Service (Port 8006)
    Health check for Enhanced Marketplace Service (Port 8002)
    """
    try:
        # Initialize service
@@ -36,7 +38,7 @@ async def marketplace_enhanced_health(session: Annotated[Session, Depends(get_se
        service_status = {
            "status": "healthy",
            "service": "marketplace-enhanced",
            "port": 8006,
            "port": 8002,
            "timestamp": datetime.utcnow().isoformat(),
            "python_version": f"{sys.version_info.major}.{sys.version_info.minor}.{sys.version_info.micro}",

@@ -98,7 +100,7 @@ async def marketplace_enhanced_health(session: Annotated[Session, Depends(get_se
        return {
            "status": "unhealthy",
            "service": "marketplace-enhanced",
            "port": 8006,
            "port": 8002,
            "timestamp": datetime.utcnow().isoformat(),
            "error": str(e)
        }
@@ -173,7 +175,7 @@ async def marketplace_enhanced_deep_health(session: Annotated[Session, Depends(g
        return {
            "status": "healthy",
            "service": "marketplace-enhanced",
            "port": 8006,
            "port": 8002,
            "timestamp": datetime.utcnow().isoformat(),
            "feature_tests": feature_tests,
            "overall_health": "pass" if all(test.get("status") == "pass" for test in feature_tests.values()) else "degraded"
@@ -184,7 +186,7 @@ async def marketplace_enhanced_deep_health(session: Annotated[Session, Depends(g
        return {
            "status": "unhealthy",
            "service": "marketplace-enhanced",
            "port": 8006,
            "port": 8002,
            "timestamp": datetime.utcnow().isoformat(),
            "error": str(e)
        }

@@ -7,12 +7,15 @@ Router to create marketplace offers from registered miners
from typing import Any
from fastapi import APIRouter, Depends, HTTPException
from sqlmodel import Session, select
import logging

from ..deps import require_admin_key
from ..domain import MarketplaceOffer, Miner
from ..schemas import MarketplaceOfferView
from ..storage import get_session

logger = logging.getLogger(__name__)

router = APIRouter(tags=["marketplace-offers"])


@@ -24,9 +27,10 @@ async def sync_offers(
    """Create marketplace offers from all registered miners"""

    # Get all registered miners
    miners = session.execute(select(Miner).where(Miner.status == "ONLINE")).all()
    miners = session.execute(select(Miner).where(Miner.status == "ONLINE")).scalars().all()

    created_offers = []
    offer_objects = []

    for miner in miners:
        # Check if offer already exists
@@ -54,10 +58,14 @@ async def sync_offers(
        )

        session.add(offer)
        created_offers.append(offer.id)
        offer_objects.append(offer)

    session.commit()

    # Collect offer IDs after commit (when IDs are generated)
    for offer in offer_objects:
        created_offers.append(offer.id)

    return {
        "status": "ok",
        "created_offers": len(created_offers),
@@ -97,3 +105,39 @@ async def list_miner_offers(session: Annotated[Session, Depends(get_session)]) -
        result.append(offer_view)

    return result


@router.get("/offers", summary="List all marketplace offers (Fixed)")
async def list_all_offers(session: Annotated[Session, Depends(get_session)]) -> list[dict[str, Any]]:
    """List all marketplace offers - Fixed version to avoid AttributeError"""
    try:
        # Use direct database query instead of GlobalMarketplaceService
        from sqlmodel import select

        offers = session.execute(select(MarketplaceOffer)).scalars().all()

        result = []
        for offer in offers:
            # Extract attributes safely
            attrs = offer.attributes or {}

            offer_data = {
                "id": offer.id,
                "provider": offer.provider,
                "capacity": offer.capacity,
                "price": offer.price,
                "status": offer.status,
                "created_at": offer.created_at.isoformat(),
                "gpu_model": attrs.get("gpu_model", "Unknown"),
                "gpu_memory_gb": attrs.get("gpu_memory_gb", 0),
                "cuda_version": attrs.get("cuda_version", "Unknown"),
                "supported_models": attrs.get("supported_models", []),
                "region": attrs.get("region", "unknown")
            }
            result.append(offer_data)

        return result

    except Exception as e:
        logger.error(f"Error listing offers: {e}")
        raise HTTPException(status_code=500, detail=str(e))
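The "Fixed" offers endpoint avoids the earlier `AttributeError` by never assuming the `attributes` JSON column is populated: it falls back to `{}` and reads every key with a default. That defensive extraction can be isolated (hypothetical helper, defaults copied from the diff):

```python
def offer_summary(attributes):
    """Read optional offer attributes with safe defaults, as list_all_offers does."""
    attrs = attributes or {}
    return {
        "gpu_model": attrs.get("gpu_model", "Unknown"),
        "gpu_memory_gb": attrs.get("gpu_memory_gb", 0),
        "cuda_version": attrs.get("cuda_version", "Unknown"),
        "supported_models": attrs.get("supported_models", []),
        "region": attrs.get("region", "unknown"),
    }

print(offer_summary(None)["gpu_model"])        # → Unknown
print(offer_summary({"region": "eu"})["region"])  # → eu
```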

@@ -55,7 +55,8 @@ async def heartbeat(
async def poll(
    req: PollRequest,
    session: Annotated[Session, Depends(get_session)],
    miner_id: str = Depends(require_miner_key()),
    api_key: str = Depends(require_miner_key()),
    miner_id: str = Depends(get_miner_id()),
) -> AssignedJob | Response: # type: ignore[arg-type]
    job = MinerService(session).poll(miner_id, req.max_wait_seconds)
    if job is None:

@@ -13,7 +13,7 @@ from typing import Dict, Any

from ..storage import get_session
from ..services.multimodal_agent import MultiModalAgentService
from ..logging import get_logger
from ..app_logging import get_logger


router = APIRouter()

@@ -13,7 +13,7 @@ import httpx
from typing import Dict, Any, List

from ..storage import get_session
from ..logging import get_logger
from ..app_logging import get_logger


router = APIRouter()

@@ -13,7 +13,7 @@ from typing import Dict, Any

from ..storage import get_session
from ..services.multimodal_agent import MultiModalAgentService
from ..logging import get_logger
from ..app_logging import get_logger


router = APIRouter()

@@ -37,4 +37,4 @@ async def health():

if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8007)
    uvicorn.run(app, host="0.0.0.0", port=8014)

@@ -14,7 +14,7 @@ from typing import Dict, Any

from ..storage import get_session
from ..services.openclaw_enhanced import OpenClawEnhancedService
from ..logging import get_logger
from ..app_logging import get_logger


router = APIRouter()

@@ -11,7 +11,7 @@ from datetime import datetime, timedelta
from pydantic import BaseModel, Field, validator

from ..storage import get_session
from ..logging import get_logger
from ..app_logging import get_logger
from ..domain.bounty import (
    AgentStake, AgentMetrics, StakingPool, StakeStatus,
    PerformanceTier, EcosystemMetrics

@@ -8,7 +8,7 @@ import re

from pydantic import BaseModel, Field, ConfigDict, field_validator, model_validator

from ..types import JobState, Constraints
from ..custom_types import JobState, Constraints


# Payment schemas

@@ -10,7 +10,7 @@ import re

from ..schemas import ConfidentialAccessRequest, ConfidentialAccessLog
from ..config import settings
from ..logging import get_logger
from ..app_logging import get_logger



@@ -379,4 +379,4 @@ async def delete_model(model_id: str):

if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8009)
    uvicorn.run(app, host="0.0.0.0", port=8015)

@@ -14,7 +14,7 @@ from dataclasses import dataclass, asdict

from ..schemas import ConfidentialAccessLog
from ..config import settings
from ..logging import get_logger
from ..app_logging import get_logger



@@ -14,7 +14,7 @@ from ..domain.bounty import (
    SubmissionStatus, BountyStats, BountyIntegration
)
from ..storage import get_session
from ..logging import get_logger
from ..app_logging import get_logger



@@ -14,7 +14,7 @@ from ..domain.bounty import (
    Bounty, BountySubmission, BountyStatus, PerformanceTier
)
from ..storage import get_session
from ..logging import get_logger
from ..app_logging import get_logger



@@ -25,7 +25,7 @@ from cryptography.hazmat.primitives.serialization import (

from ..schemas import ConfidentialTransaction, ConfidentialAccessLog
from ..config import settings
from ..logging import get_logger
from ..app_logging import get_logger



@@ -18,7 +18,7 @@ from ..repositories.confidential import (
    KeyRotationRepository
)
from ..config import settings
from ..logging import get_logger
from ..app_logging import get_logger



@@ -17,7 +17,7 @@ from cryptography.hazmat.primitives.ciphers.aead import AESGCM

from ..schemas import KeyPair, KeyRotationLog, AuditAuthorization
from ..config import settings
from ..logging import get_logger
from ..app_logging import get_logger



@@ -36,11 +36,11 @@ class MarketplaceService:
        stmt = stmt.where(MarketplaceOffer.status == normalised)

        stmt = stmt.offset(offset).limit(limit)
        offers = self.session.execute(stmt).all()
        offers = self.session.execute(stmt).scalars().all()
        return [self._to_offer_view(o) for o in offers]

    def get_stats(self) -> MarketplaceStatsView:
        offers = self.session.execute(select(MarketplaceOffer)).all()
        offers = self.session.execute(select(MarketplaceOffer)).scalars().all()
        open_offers = [offer for offer in offers if offer.status == "open"]

        total_offers = len(offers)
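The repeated `.scalars().all()` fixes in this diff matter because `session.execute(select(Model))` returns Row objects (1-tuples wrapping each entity), not the entities themselves, so attribute access on the results fails; `.scalars()` unwraps the first element of each row. A stand-in sketch of the difference using plain tuples rather than SQLAlchemy:

```python
# What .all() yields on a select(Model) result: 1-tuples wrapping each entity.
rows = [("miner-a",), ("miner-b",)]

# What .scalars().all() yields: the entities themselves.
entities = [row[0] for row in rows]

print(entities)  # → ['miner-a', 'miner-b']
```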

@@ -8,8 +8,11 @@ from datetime import datetime


import sys
from aitbc_crypto.signing import ReceiptSigner

import sys

from sqlmodel import Session

from ..config import settings

@@ -14,7 +14,7 @@ from ..domain.bounty import (
    PerformanceTier, EcosystemMetrics
)
from ..storage import get_session
from ..logging import get_logger
from ..app_logging import get_logger



@@ -13,7 +13,7 @@ import logging

from ..schemas import Receipt, JobResult
from ..config import settings
from ..logging import get_logger
from ..app_logging import get_logger

logger = get_logger(__name__)


@@ -62,7 +62,13 @@ def get_engine() -> Engine:
    return _engine


from app.domain import *
# Import only essential models for database initialization
# This avoids loading all domain models which causes 2+ minute startup delays
from app.domain import (
    Job, Miner, MarketplaceOffer, MarketplaceBid,
    User, Wallet, Transaction, UserSession,
    JobPayment, PaymentEscrow, JobReceipt
)

def init_db() -> Engine:
    """Initialize database tables and ensure data directory exists."""

@@ -0,0 +1,10 @@
{
  "protocol": "groth16",
  "curve": "bn128",
  "nPublic": 1,
  "vk_alpha_1": ["0x1234", "0x5678", "0x0"],
  "vk_beta_2": [["0x1234", "0x5678", "0x0"], ["0x1234", "0x5678", "0x0"]],
  "vk_gamma_2": [["0x1234", "0x5678", "0x0"], ["0x1234", "0x5678", "0x0"]],
  "vk_delta_2": [["0x1234", "0x5678", "0x0"], ["0x1234", "0x5678", "0x0"]],
  "IC": [["0x1234", "0x5678", "0x0"]]
}
@@ -0,0 +1,10 @@
{
  "protocol": "groth16",
  "curve": "bn128",
  "nPublic": 1,
  "vk_alpha_1": ["0x1234", "0x5678", "0x0"],
  "vk_beta_2": [["0x1234", "0x5678", "0x0"], ["0x1234", "0x5678", "0x0"]],
  "vk_gamma_2": [["0x1234", "0x5678", "0x0"], ["0x1234", "0x5678", "0x0"]],
  "vk_delta_2": [["0x1234", "0x5678", "0x0"], ["0x1234", "0x5678", "0x0"]],
  "IC": [["0x1234", "0x5678", "0x0"]]
}
@@ -0,0 +1,10 @@
{
  "protocol": "groth16",
  "curve": "bn128",
  "nPublic": 1,
  "vk_alpha_1": ["0x1234", "0x5678", "0x0"],
  "vk_beta_2": [["0x1234", "0x5678", "0x0"], ["0x1234", "0x5678", "0x0"]],
  "vk_gamma_2": [["0x1234", "0x5678", "0x0"], ["0x1234", "0x5678", "0x0"]],
  "vk_delta_2": [["0x1234", "0x5678", "0x0"], ["0x1234", "0x5678", "0x0"]],
  "IC": [["0x1234", "0x5678", "0x0"]]
}
@@ -0,0 +1,10 @@
{
  "protocol": "groth16",
  "curve": "bn128",
  "nPublic": 1,
  "vk_alpha_1": ["0x1234", "0x5678", "0x0"],
  "vk_beta_2": [["0x1234", "0x5678", "0x0"], ["0x1234", "0x5678", "0x0"]],
  "vk_gamma_2": [["0x1234", "0x5678", "0x0"], ["0x1234", "0x5678", "0x0"]],
  "vk_delta_2": [["0x1234", "0x5678", "0x0"], ["0x1234", "0x5678", "0x0"]],
  "IC": [["0x1234", "0x5678", "0x0"]]
}
@@ -1,6 +1,6 @@
[tool.poetry]
name = "aitbc-pool-hub"
version = "0.1.0"
version = "v0.2.3"
description = "AITBC Pool Hub Service"
authors = ["AITBC Team <team@aitbc.dev>"]
readme = "README.md"

@@ -1,6 +1,6 @@
[tool.poetry]
name = "aitbc-wallet-daemon"
version = "0.1.0"
version = "v0.2.3"
description = "AITBC Wallet Daemon Service"
authors = ["AITBC Team <team@aitbc.dev>"]
readme = "README.md"
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
@@ -12,10 +12,10 @@ def ai_group():
    pass

@ai_group.command()
@click.option('--port', default=8008, show_default=True, help='AI provider port')
@click.option('--port', default=8015, show_default=True, help='AI provider port')
@click.option('--model', default='qwen3:8b', show_default=True, help='Ollama model name')
@click.option('--wallet', 'provider_wallet', required=True, help='Provider wallet address (for verification)')
@click.option('--marketplace-url', default='http://127.0.0.1:8014', help='Marketplace API base URL')
@click.option('--marketplace-url', default='http://127.0.0.1:8002', help='Marketplace API base URL')
def status(port, model, provider_wallet, marketplace_url):
    """Check AI provider service status."""
    try:
@@ -33,10 +33,10 @@ def status(port, model, provider_wallet, marketplace_url):
        click.echo(f"❌ Error checking AI Provider: {e}")

@ai_group.command()
@click.option('--port', default=8008, show_default=True, help='AI provider port')
@click.option('--port', default=8015, show_default=True, help='AI provider port')
@click.option('--model', default='qwen3:8b', show_default=True, help='Ollama model name')
@click.option('--wallet', 'provider_wallet', required=True, help='Provider wallet address (for verification)')
@click.option('--marketplace-url', default='http://127.0.0.1:8014', help='Marketplace API base URL')
@click.option('--marketplace-url', default='http://127.0.0.1:8002', help='Marketplace API base URL')
def start(port, model, provider_wallet, marketplace_url):
    """Start AI provider service - provides setup instructions"""
    click.echo(f"AI Provider Service Setup:")
@@ -62,7 +62,7 @@ def stop():

@ai_group.command()
@click.option('--to', required=True, help='Provider host (IP)')
@click.option('--port', default=8008, help='Provider port')
@click.option('--port', default=8015, help='Provider port')
@click.option('--prompt', required=True, help='Prompt to send')
@click.option('--buyer-wallet', 'buyer_wallet', required=True, help='Buyer wallet name (in local wallet store)')
@click.option('--provider-wallet', 'provider_wallet', required=True, help='Provider wallet address (recipient)')

@@ -81,8 +81,8 @@ def status(service):
    checks = [
        "Coordinator API: http://localhost:8000/health",
        "Blockchain Node: http://localhost:8006/status",
        "Marketplace: http://localhost:8014/health",
        "Wallet Service: http://localhost:8002/status"
        "Marketplace: http://localhost:8002/health",
        "Wallet Service: http://localhost:8003/status"
    ]

    for check in checks:

@@ -11,10 +11,10 @@ def _get_explorer_endpoint(ctx):
    """Get explorer endpoint from config or default"""
    try:
        config = ctx.obj['config']
        # Default to port 8016 for blockchain explorer
        return getattr(config, 'explorer_url', 'http://10.1.223.1:8016')
        # Default to port 8004 for blockchain explorer
        return getattr(config, 'explorer_url', 'http://10.1.223.1:8004')
    except:
        return "http://10.1.223.1:8016"
        return "http://10.1.223.1:8004"


def _curl_request(url: str, params: dict = None):
Binary file not shown.
Binary file not shown.
Binary file not shown.
50  cli/integrate_miner_cli.sh  Executable file
@@ -0,0 +1,50 @@
#!/bin/bash
# AITBC Miner Management Integration Script
# This script integrates the miner management functionality with the main AITBC CLI

echo "🤖 AITBC Miner Management Integration"
echo "=================================="

# Check if miner CLI exists
MINER_CLI="/opt/aitbc/cli/miner_cli.py"
if [ ! -f "$MINER_CLI" ]; then
    echo "❌ Error: Miner CLI not found at $MINER_CLI"
    exit 1
fi

# Create a symlink in the main CLI directory
MAIN_CLI_DIR="/opt/aitbc"
MINER_CMD="$MAIN_CLI_DIR/aitbc-miner"

if [ ! -L "$MINER_CMD" ]; then
    echo "🔗 Creating symlink: $MINER_CMD -> $MINER_CLI"
    ln -s "$MINER_CLI" "$MINER_CMD"
    chmod +x "$MINER_CMD"
fi

# Test the integration
echo "🧪 Testing miner CLI integration..."
echo ""

# Test help
echo "📋 Testing help command:"
$MINER_CMD --help | head -10
echo ""

# Test registration (with test data)
echo "📝 Testing registration command:"
$MINER_CMD register --miner-id integration-test --wallet ait113e1941cb60f3bb945ec9d412527b6048b73eb2d --gpu-memory 2048 --models qwen3:8b --pricing 0.45 --region integration-test 2>/dev/null | grep "Status:"
echo ""

echo "✅ Miner CLI integration completed!"
echo ""
echo "🚀 Usage Examples:"
echo "  $MINER_CMD register --miner-id my-miner --wallet <wallet> --gpu-memory 8192 --models qwen3:8b --pricing 0.50"
echo "  $MINER_CMD status --miner-id my-miner"
echo "  $MINER_CMD poll --miner-id my-miner"
echo "  $MINER_CMD heartbeat --miner-id my-miner"
echo "  $MINER_CMD result --job-id <job-id> --miner-id my-miner --result 'Job completed'"
echo "  $MINER_CMD marketplace list"
echo "  $MINER_CMD marketplace create --miner-id my-miner --price 0.75"
echo ""
echo "📚 All miner management commands are now available via: $MINER_CMD"
254  cli/miner_cli.py  Executable file
@@ -0,0 +1,254 @@
#!/usr/bin/env python3
"""
AITBC Miner CLI Extension
Adds comprehensive miner management commands to the AITBC CLI
"""

import sys
import os
import argparse
from pathlib import Path

# Add the CLI directory to path
sys.path.insert(0, str(Path(__file__).parent))

try:
    from miner_management import miner_cli_dispatcher
except ImportError:
    print("❌ Error: miner_management module not found")
    sys.exit(1)


def main():
    """Main CLI entry point for miner management"""
    parser = argparse.ArgumentParser(
        description="AITBC AI Compute Miner Management",
        formatter_class=argparse.RawDescriptionHelpFormatter,
        epilog="""
Examples:
  # Register as AI compute provider
  python miner_cli.py register --miner-id ai-miner-1 --wallet ait1xyz --gpu-memory 8192 --models qwen3:8b llama3:8b --pricing 0.50

  # Check miner status
  python miner_cli.py status --miner-id ai-miner-1

  # Poll for jobs
  python miner_cli.py poll --miner-id ai-miner-1 --max-wait 60

  # Submit job result
  python miner_cli.py result --job-id job123 --miner-id ai-miner-1 --result "Job completed successfully" --success

  # List marketplace offers
  python miner_cli.py marketplace list --region us-west

  # Create marketplace offer
  python miner_cli.py marketplace create --miner-id ai-miner-1 --price 0.75 --capacity 2
"""
    )

    parser.add_argument("--coordinator-url", default="http://localhost:8000",
                        help="Coordinator API URL")
    parser.add_argument("--api-key", default="miner_prod_key_use_real_value",
                        help="Miner API key")

    subparsers = parser.add_subparsers(dest="action", help="Miner management actions")

    # Register command
    register_parser = subparsers.add_parser("register", help="Register as AI compute provider")
    register_parser.add_argument("--miner-id", required=True, help="Unique miner identifier")
    register_parser.add_argument("--wallet", required=True, help="Wallet address for rewards")
    register_parser.add_argument("--capabilities", help="JSON string of miner capabilities")
    register_parser.add_argument("--gpu-memory", type=int, help="GPU memory in MB")
    register_parser.add_argument("--models", nargs="+", help="Supported AI models")
    register_parser.add_argument("--pricing", type=float, help="Price per hour")
    register_parser.add_argument("--concurrency", type=int, default=1, help="Max concurrent jobs")
    register_parser.add_argument("--region", help="Geographic region")

    # Status command
    status_parser = subparsers.add_parser("status", help="Get miner status")
    status_parser.add_argument("--miner-id", required=True, help="Miner identifier")

    # Heartbeat command
    heartbeat_parser = subparsers.add_parser("heartbeat", help="Send miner heartbeat")
    heartbeat_parser.add_argument("--miner-id", required=True, help="Miner identifier")
    heartbeat_parser.add_argument("--inflight", type=int, default=0, help="Currently running jobs")
    heartbeat_parser.add_argument("--status", default="ONLINE", help="Miner status")

    # Poll command
    poll_parser = subparsers.add_parser("poll", help="Poll for available jobs")
    poll_parser.add_argument("--miner-id", required=True, help="Miner identifier")
    poll_parser.add_argument("--max-wait", type=int, default=30, help="Max wait time in seconds")
    poll_parser.add_argument("--auto-execute", action="store_true", help="Automatically execute assigned jobs")

    # Result command
    result_parser = subparsers.add_parser("result", help="Submit job result")
    result_parser.add_argument("--job-id", required=True, help="Job identifier")
    result_parser.add_argument("--miner-id", required=True, help="Miner identifier")
    result_parser.add_argument("--result", help="Job result (JSON string)")
    result_parser.add_argument("--result-file", help="File containing job result")
    result_parser.add_argument("--success", action="store_true", help="Job completed successfully")
    result_parser.add_argument("--duration", type=int, help="Job duration in milliseconds")

    # Update command
    update_parser = subparsers.add_parser("update", help="Update miner capabilities")
    update_parser.add_argument("--miner-id", required=True, help="Miner identifier")
    update_parser.add_argument("--capabilities", help="JSON string of updated capabilities")
    update_parser.add_argument("--gpu-memory", type=int, help="Updated GPU memory in MB")
    update_parser.add_argument("--models", nargs="+", help="Updated supported AI models")
    update_parser.add_argument("--pricing", type=float, help="Updated price per hour")
    update_parser.add_argument("--concurrency", type=int, help="Updated max concurrent jobs")
    update_parser.add_argument("--region", help="Updated geographic region")
    update_parser.add_argument("--wallet", help="Updated wallet address")

    # Earnings command
    earnings_parser = subparsers.add_parser("earnings", help="Check miner earnings")
    earnings_parser.add_argument("--miner-id", required=True, help="Miner identifier")
    earnings_parser.add_argument("--period", choices=["day", "week", "month", "all"], default="all", help="Earnings period")

    # Marketplace commands
    marketplace_parser = subparsers.add_parser("marketplace", help="Manage marketplace offers")
    marketplace_subparsers = marketplace_parser.add_subparsers(dest="marketplace_action", help="Marketplace actions")

    # Marketplace list
    market_list_parser = marketplace_subparsers.add_parser("list", help="List marketplace offers")
    market_list_parser.add_argument("--miner-id", help="Filter by miner ID")
    market_list_parser.add_argument("--region", help="Filter by region")

    # Marketplace create
    market_create_parser = marketplace_subparsers.add_parser("create", help="Create marketplace offer")
    market_create_parser.add_argument("--miner-id", required=True, help="Miner identifier")
    market_create_parser.add_argument("--price", type=float, required=True, help="Offer price per hour")
    market_create_parser.add_argument("--capacity", type=int, default=1, help="Available capacity")
    market_create_parser.add_argument("--region", help="Geographic region")

    args = parser.parse_args()

    if not args.action:
        parser.print_help()
        return

    # Initialize action variable
    action = args.action

    # Prepare kwargs for the dispatcher
    kwargs = {
        "coordinator_url": args.coordinator_url,
        "api_key": args.api_key
    }

    # Add action-specific arguments
    if args.action == "register":
        kwargs.update({
            "miner_id": args.miner_id,
            "wallet": args.wallet,
            "capabilities": args.capabilities,
            "gpu_memory": args.gpu_memory,
            "models": args.models,
            "pricing": args.pricing,
            "concurrency": args.concurrency,
            "region": args.region
        })

    elif args.action == "status":
        kwargs["miner_id"] = args.miner_id

    elif args.action == "heartbeat":
        kwargs.update({
            "miner_id": args.miner_id,
            "inflight": args.inflight,
            "status": args.status
        })

    elif args.action == "poll":
        kwargs.update({
            "miner_id": args.miner_id,
            "max_wait": args.max_wait,
            "auto_execute": args.auto_execute
        })

    elif args.action == "result":
        kwargs.update({
            "job_id": args.job_id,
            "miner_id": args.miner_id,
            "result": args.result,
            "result_file": args.result_file,
            "success": args.success,
            "duration": args.duration
        })

    elif args.action == "update":
        kwargs.update({
            "miner_id": args.miner_id,
            "capabilities": args.capabilities,
            "gpu_memory": args.gpu_memory,
            "models": args.models,
            "pricing": args.pricing,
            "concurrency": args.concurrency,
            "region": args.region,
            "wallet": args.wallet
        })

    elif args.action == "earnings":
        kwargs.update({
            "miner_id": args.miner_id,
            "period": args.period
        })

    elif args.action == "marketplace":
        if args.marketplace_action == "list":
            kwargs.update({
                "miner_id": getattr(args, 'miner_id', None),
                "region": getattr(args, 'region', None)
            })
            action = "marketplace_list"
        elif args.marketplace_action == "create":
            kwargs.update({
                "miner_id": args.miner_id,
                "price": args.price,
                "capacity": args.capacity,
                "region": getattr(args, 'region', None)
            })
            action = "marketplace_create"
        else:
            print("❌ Unknown marketplace action")
            return

    result = miner_cli_dispatcher(action, **kwargs)

    # Display results
    if result:
        print("\n" + "="*60)
        print(f"🤖 AITBC Miner Management - {action.upper()}")
        print("="*60)

        if "status" in result:
            print(f"Status: {result['status']}")

        if result.get("status", "").startswith("✅"):
            # Success - show details
            for key, value in result.items():
                if key not in ["action", "status"]:
                    if isinstance(value, (dict, list)):
                        print(f"{key}:")
                        if isinstance(value, dict):
                            for k, v in value.items():
                                print(f"  {k}: {v}")
                        else:
                            for item in value:
                                print(f"  - {item}")
                    else:
                        print(f"{key}: {value}")
        else:
            # Error or info - show all relevant fields
            for key, value in result.items():
                if key != "action":
                    print(f"{key}: {value}")

        print("="*60)
    else:
        print("❌ No response from server")


if __name__ == "__main__":
    main()
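The subcommand wiring in `miner_cli.py` follows a common argparse pattern: each subparser records its name in `args.action`, its flags are collected into a `kwargs` dict, and a single dispatcher call handles all actions. A minimal runnable sketch of that pattern (the `dispatch` stub below is illustrative, not the project's `miner_cli_dispatcher`):

```python
import argparse

def dispatch(action, **kwargs):
    # Stand-in for miner_cli_dispatcher: just echo what would be invoked.
    return {"action": action, **kwargs}

parser = argparse.ArgumentParser()
sub = parser.add_subparsers(dest="action")

# One subparser per action; "--miner-id" becomes the attribute "miner_id".
status_p = sub.add_parser("status")
status_p.add_argument("--miner-id", required=True)

args = parser.parse_args(["status", "--miner-id", "ai-miner-1"])
result = dispatch(args.action, miner_id=args.miner_id)
```

The real CLI extends this shape with one `if args.action == ...` branch per subcommand to pick which parsed attributes go into `kwargs`.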
505  cli/miner_management.py  Normal file
@@ -0,0 +1,505 @@
#!/usr/bin/env python3
"""
AITBC Miner Management Module
Complete command-line interface for AI compute miner operations including:
- Miner Registration
- Status Management
- Job Polling & Execution
- Marketplace Integration
- Payment Management
"""

import json
import time
import requests
from typing import Optional, Dict, Any

# Default configuration
DEFAULT_COORDINATOR_URL = "http://localhost:8000"
DEFAULT_API_KEY = "miner_prod_key_use_real_value"


def register_miner(
    miner_id: str,
    wallet: str,
    api_key: str = DEFAULT_API_KEY,
    coordinator_url: str = DEFAULT_COORDINATOR_URL,
    capabilities: Optional[str] = None,
    gpu_memory: Optional[int] = None,
    models: Optional[list] = None,
    pricing: Optional[float] = None,
    concurrency: int = 1,
    region: Optional[str] = None
) -> Optional[Dict]:
    """Register miner as AI compute provider"""
    try:
        headers = {
            "X-Api-Key": api_key,
            "X-Miner-ID": miner_id,
            "Content-Type": "application/json"
        }

        # Build capabilities from arguments
        caps = {}

        if gpu_memory:
            caps["gpu_memory"] = gpu_memory
            caps["gpu_memory_gb"] = gpu_memory
        if models:
            caps["models"] = models
            caps["supported_models"] = models
        if pricing:
            caps["pricing_per_hour"] = pricing
            caps["price_per_hour"] = pricing
        caps["gpu"] = "AI-GPU"
        caps["gpu_count"] = 1
        caps["cuda_version"] = "12.0"

        # Override with capabilities JSON if provided
        if capabilities:
            caps.update(json.loads(capabilities))

        payload = {
            "wallet_address": wallet,
            "capabilities": caps,
            "concurrency": concurrency,
            "region": region
        }

        response = requests.post(
            f"{coordinator_url}/v1/miners/register",
            headers=headers,
            json=payload
        )

        if response.status_code == 200:
            result = response.json()
            return {
                "action": "register",
                "miner_id": miner_id,
                "status": "✅ Registered successfully",
                "session_token": result.get("session_token"),
                "coordinator_url": coordinator_url,
                "capabilities": caps
            }
        else:
            return {
                "action": "register",
                "status": "❌ Registration failed",
                "error": response.text,
                "status_code": response.status_code
            }

    except Exception as e:
        return {"action": "register", "status": f"❌ Error: {str(e)}"}


def get_miner_status(
    miner_id: str,
    api_key: str = DEFAULT_API_KEY,
    coordinator_url: str = DEFAULT_COORDINATOR_URL
) -> Optional[Dict]:
    """Get miner status and statistics"""
    try:
        # Use admin API key to get miner status
        admin_api_key = api_key.replace("miner_", "admin_")
        headers = {"X-Api-Key": admin_api_key}

        response = requests.get(
            f"{coordinator_url}/v1/admin/miners",
            headers=headers
        )

        if response.status_code == 200:
            miners = response.json().get("items", [])
            miner_info = next((m for m in miners if m["miner_id"] == miner_id), None)

            if miner_info:
                return {
                    "action": "status",
                    "miner_id": miner_id,
                    "status": f"✅ {miner_info['status']}",
                    "inflight": miner_info["inflight"],
                    "concurrency": miner_info["concurrency"],
                    "region": miner_info["region"],
                    "last_heartbeat": miner_info["last_heartbeat"],
                    "jobs_completed": miner_info["jobs_completed"],
                    "jobs_failed": miner_info["jobs_failed"],
                    "average_job_duration_ms": miner_info["average_job_duration_ms"],
                    "success_rate": (
                        miner_info["jobs_completed"] /
                        max(1, miner_info["jobs_completed"] + miner_info["jobs_failed"]) * 100
                    )
                }
            else:
                return {
                    "action": "status",
                    "miner_id": miner_id,
                    "status": "❌ Miner not found"
                }
        else:
            return {"action": "status", "status": "❌ Failed to get status", "error": response.text}

    except Exception as e:
        return {"action": "status", "status": f"❌ Error: {str(e)}"}


def send_heartbeat(
    miner_id: str,
    api_key: str = DEFAULT_API_KEY,
    coordinator_url: str = DEFAULT_COORDINATOR_URL,
    inflight: int = 0,
    status: str = "ONLINE"
) -> Optional[Dict]:
    """Send miner heartbeat"""
    try:
        headers = {
            "X-Api-Key": api_key,
            "X-Miner-ID": miner_id,
            "Content-Type": "application/json"
        }

        payload = {
            "inflight": inflight,
            "status": status,
            "metadata": {
                "timestamp": time.time(),
                "version": "1.0.0",
                "system_info": "AI Compute Miner"
            }
        }

        response = requests.post(
            f"{coordinator_url}/v1/miners/heartbeat",
            headers=headers,
            json=payload
        )

        if response.status_code == 200:
            return {
                "action": "heartbeat",
                "miner_id": miner_id,
                "status": "✅ Heartbeat sent successfully",
                "inflight": inflight,
                "miner_status": status
            }
        else:
            return {"action": "heartbeat", "status": "❌ Heartbeat failed", "error": response.text}

    except Exception as e:
        return {"action": "heartbeat", "status": f"❌ Error: {str(e)}"}


def poll_jobs(
    miner_id: str,
    api_key: str = DEFAULT_API_KEY,
    coordinator_url: str = DEFAULT_COORDINATOR_URL,
    max_wait: int = 30,
    auto_execute: bool = False
) -> Optional[Dict]:
    """Poll for available jobs"""
    try:
        headers = {
            "X-Api-Key": api_key,
            "X-Miner-ID": miner_id,
            "Content-Type": "application/json"
        }

        payload = {"max_wait_seconds": max_wait}

        response = requests.post(
            f"{coordinator_url}/v1/miners/poll",
            headers=headers,
            json=payload
        )

        if response.status_code == 200 and response.content:
            job = response.json()
            result = {
                "action": "poll",
                "miner_id": miner_id,
                "status": "✅ Job assigned",
                "job_id": job.get("job_id"),
                "payload": job.get("payload"),
                "constraints": job.get("constraints"),
                "assigned_at": time.strftime("%Y-%m-%d %H:%M:%S")
            }

            if auto_execute:
                result["auto_execution"] = "🤖 Job execution would start here"
                result["execution_status"] = "Ready to execute"

            return result
        elif response.status_code == 204:
            return {
                "action": "poll",
                "miner_id": miner_id,
                "status": "⏸️ No jobs available",
                "message": "No jobs in queue"
            }
        else:
            return {"action": "poll", "status": "❌ Poll failed", "error": response.text}

    except Exception as e:
        return {"action": "poll", "status": f"❌ Error: {str(e)}"}


def submit_job_result(
    job_id: str,
    miner_id: str,
    result: str,
    api_key: str = DEFAULT_API_KEY,
    coordinator_url: str = DEFAULT_COORDINATOR_URL,
    success: bool = True,
    duration: Optional[int] = None,
    result_file: Optional[str] = None
) -> Optional[Dict]:
    """Submit job result"""
    try:
        headers = {
            "X-Api-Key": api_key,
            "X-Miner-ID": miner_id,
            "Content-Type": "application/json"
        }

        # Load result from file if specified
        if result_file:
            with open(result_file, 'r') as f:
                result = f.read()

        payload = {
            "result": result,
            "success": success,
            "metrics": {
                "duration_ms": duration,
                "completed_at": time.time()
            }
        }

        response = requests.post(
            f"{coordinator_url}/v1/miners/{job_id}/result",
            headers=headers,
            json=payload
        )

        if response.status_code == 200:
            return {
                "action": "result",
                "job_id": job_id,
                "miner_id": miner_id,
                "status": "✅ Result submitted successfully",
                "success": success,
                "duration_ms": duration
            }
        else:
            return {"action": "result", "status": "❌ Result submission failed", "error": response.text}

    except Exception as e:
        return {"action": "result", "status": f"❌ Error: {str(e)}"}


def update_capabilities(
    miner_id: str,
    api_key: str = DEFAULT_API_KEY,
    coordinator_url: str = DEFAULT_COORDINATOR_URL,
    capabilities: Optional[str] = None,
    gpu_memory: Optional[int] = None,
    models: Optional[list] = None,
    pricing: Optional[float] = None,
    concurrency: Optional[int] = None,
    region: Optional[str] = None,
    wallet: Optional[str] = None
) -> Optional[Dict]:
    """Update miner capabilities"""
    try:
        headers = {
            "X-Api-Key": api_key,
            "X-Miner-ID": miner_id,
            "Content-Type": "application/json"
        }

        # Build capabilities from arguments
        caps = {}
        if gpu_memory:
            caps["gpu_memory"] = gpu_memory
            caps["gpu_memory_gb"] = gpu_memory
        if models:
            caps["models"] = models
            caps["supported_models"] = models
        if pricing:
            caps["pricing_per_hour"] = pricing
            caps["price_per_hour"] = pricing

        # Override with capabilities JSON if provided
        if capabilities:
            caps.update(json.loads(capabilities))

        payload = {
            "capabilities": caps,
            "concurrency": concurrency,
            "region": region
        }

        if wallet:
            payload["wallet_address"] = wallet

        response = requests.put(
            f"{coordinator_url}/v1/miners/{miner_id}/capabilities",
            headers=headers,
            json=payload
        )

        if response.status_code == 200:
            return {
                "action": "update",
                "miner_id": miner_id,
                "status": "✅ Capabilities updated successfully",
                "updated_capabilities": caps
            }
        else:
            return {"action": "update", "status": "❌ Update failed", "error": response.text}

    except Exception as e:
        return {"action": "update", "status": f"❌ Error: {str(e)}"}


def check_earnings(
    miner_id: str,
    api_key: str = DEFAULT_API_KEY,
    coordinator_url: str = DEFAULT_COORDINATOR_URL,
    period: str = "all"
) -> Optional[Dict]:
    """Check miner earnings (placeholder for payment integration)"""
    try:
        # This would integrate with payment system when implemented
        return {
            "action": "earnings",
            "miner_id": miner_id,
            "period": period,
            "status": "📊 Earnings calculation",
            "total_earnings": 0.0,
            "jobs_completed": 0,
            "average_payment": 0.0,
            "note": "Payment integration coming soon"
        }

    except Exception as e:
        return {"action": "earnings", "status": f"❌ Error: {str(e)}"}


def list_marketplace_offers(
    miner_id: Optional[str] = None,
    region: Optional[str] = None,
    api_key: str = DEFAULT_API_KEY,
    coordinator_url: str = DEFAULT_COORDINATOR_URL
) -> Optional[Dict]:
    """List marketplace offers"""
    try:
        admin_headers = {"X-Api-Key": api_key.replace("miner_", "admin_")}

        params = {}
        if region:
            params["region"] = region

        response = requests.get(
            f"{coordinator_url}/v1/marketplace/miner-offers",
            headers=admin_headers,
            params=params
        )

        if response.status_code == 200:
            offers = response.json()

            # Filter by miner if specified
            if miner_id:
                offers = [o for o in offers if miner_id in str(o).lower()]

            return {
                "action": "marketplace_list",
                "status": "✅ Offers retrieved",
                "offers": offers,
                "count": len(offers),
                "region_filter": region,
                "miner_filter": miner_id
            }
        else:
            return {"action": "marketplace_list", "status": "❌ Failed to get offers", "error": response.text}

    except Exception as e:
        return {"action": "marketplace_list", "status": f"❌ Error: {str(e)}"}


def create_marketplace_offer(
    miner_id: str,
    price: float,
    api_key: str = DEFAULT_API_KEY,
    coordinator_url: str = DEFAULT_COORDINATOR_URL,
    capacity: int = 1,
    region: Optional[str] = None
) -> Optional[Dict]:
    """Create marketplace offer"""
    try:
        admin_headers = {"X-Api-Key": api_key.replace("miner_", "admin_")}

        payload = {
            "miner_id": miner_id,
            "price": price,
            "capacity": capacity,
            "region": region
        }

        response = requests.post(
            f"{coordinator_url}/v1/marketplace/offers",
            headers=admin_headers,
            json=payload
        )

        if response.status_code == 200:
            return {
                "action": "marketplace_create",
                "miner_id": miner_id,
                "status": "✅ Offer created successfully",
                "price": price,
                "capacity": capacity,
                "region": region
            }
        else:
            return {"action": "marketplace_create", "status": "❌ Offer creation failed", "error": response.text}

    except Exception as e:
        return {"action": "marketplace_create", "status": f"❌ Error: {str(e)}"}


# Main function for CLI integration
def miner_cli_dispatcher(action: str, **kwargs) -> Optional[Dict]:
    """Main dispatcher for miner management CLI commands"""

    actions = {
        "register": register_miner,
        "status": get_miner_status,
        "heartbeat": send_heartbeat,
        "poll": poll_jobs,
        "result": submit_job_result,
        "update": update_capabilities,
        "earnings": check_earnings,
        "marketplace_list": list_marketplace_offers,
        "marketplace_create": create_marketplace_offer
    }

    if action in actions:
        return actions[action](**kwargs)
    else:
        return {
            "action": action,
            "status": f"❌ Unknown action. Available: {', '.join(actions.keys())}"
        }


if __name__ == "__main__":
    # Test the module
    print("🚀 AITBC Miner Management Module")
    print("Available functions:")
    for func in [register_miner, get_miner_status, send_heartbeat, poll_jobs,
                 submit_job_result, update_capabilities, check_earnings,
                 list_marketplace_offers, create_marketplace_offer]:
        print(f" - {func.__name__}")
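`miner_cli_dispatcher` resolves actions through a name-to-function table and returns a structured error dict for unknown actions instead of raising. A self-contained sketch of that table-dispatch pattern, with stub handlers standing in for the real coordinator API calls:

```python
from typing import Callable, Dict

def register(miner_id: str) -> dict:
    # Stub: the real handler POSTs to /v1/miners/register.
    return {"action": "register", "status": "✅", "miner_id": miner_id}

def heartbeat(miner_id: str, inflight: int = 0) -> dict:
    # Stub: the real handler POSTs to /v1/miners/heartbeat.
    return {"action": "heartbeat", "status": "✅", "inflight": inflight}

ACTIONS: Dict[str, Callable[..., dict]] = {
    "register": register,
    "heartbeat": heartbeat,
}

def dispatch(action: str, **kwargs) -> dict:
    # Unknown actions produce a structured error rather than an exception,
    # mirroring the fallback branch in miner_cli_dispatcher.
    if action in ACTIONS:
        return ACTIONS[action](**kwargs)
    return {"action": action,
            "status": f"❌ Unknown action. Available: {', '.join(ACTIONS)}"}

ok = dispatch("register", miner_id="m1")
err = dispatch("bogus")
```

Because every handler returns a dict with a `status` field, the caller can render success and failure uniformly, which is what the CLI's display loop relies on.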
28  cli/requirements-cli.txt  Normal file
@@ -0,0 +1,28 @@
# AITBC CLI Requirements
# Specific dependencies for the AITBC CLI tool

# Core CLI Dependencies
requests>=2.32.0
cryptography>=46.0.0
pydantic>=2.12.0
python-dotenv>=1.2.0

# CLI Enhancement Dependencies
click>=8.1.0
rich>=13.0.0
tabulate>=0.9.0
colorama>=0.4.4
keyring>=23.0.0
click-completion>=0.5.2

# JSON & Data Processing
orjson>=3.10.0
python-dateutil>=2.9.0
pytz>=2024.1

# Blockchain & Cryptocurrency
base58>=2.1.1
ecdsa>=0.19.0

# Utilities
psutil>=5.9.0
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
@@ -1,3 +1,3 @@
 # AITBC CLI Configuration
 # Copy to .aitbc.yaml and adjust for your environment
-coordinator_url: http://127.0.0.1:18000
+coordinator_url: http://127.0.0.1:8000
320  config/.env.production  Executable file
@@ -0,0 +1,320 @@
|
||||
# ⚠️ DEPRECATED: This file is legacy and no longer used
|
||||
# ✅ USE INSTEAD: /etc/aitbc/.env (main configuration file)
|
||||
# This file is kept for historical reference only
|
||||
# ==============================================================================
|
||||
|
||||
# AITBC Advanced Agent Features Production Environment Configuration
|
||||
# This file contains sensitive production configuration
|
||||
# DO NOT commit to version control
|
||||
|
||||
# Network Configuration
|
||||
NETWORK=mainnet
|
||||
ENVIRONMENT=production
|
||||
CHAIN_ID=1
|
||||
|
||||
# Production Wallet Configuration
|
||||
PRODUCTION_PRIVATE_KEY=your_production_private_key_here
|
||||
PRODUCTION_MNEMONIC=your_production_mnemonic_here
|
||||
PRODUCTION_DERIVATION_PATH=m/44'/60'/0'/0/0
|
||||
|
||||
# Gas Configuration
|
||||
PRODUCTION_GAS_PRICE=50000000000
|
||||
PRODUCTION_GAS_LIMIT=8000000
|
||||
PRODUCTION_MAX_FEE_PER_GAS=100000000000
|
||||
|
||||
# API Keys
|
||||
ETHERSCAN_API_KEY=your_etherscan_api_key_here
|
||||
INFURA_PROJECT_ID=your_infura_project_id_here
|
||||
INFURA_PROJECT_SECRET=your_infura_project_secret_here
|
||||
|
||||
# Database Configuration
|
||||
DATABASE_URL=postgresql://user:password@localhost:5432/aitbc_production
|
||||
REDIS_URL=redis://localhost:6379/aitbc_production
|
||||
|
||||
# Security Configuration
|
||||
JWT_SECRET=your_jwt_secret_here_very_long_and_secure
|
||||
ENCRYPTION_KEY=your_encryption_key_here_32_characters_long
|
||||
CORS_ORIGIN=https://aitbc.dev
|
||||
RATE_LIMIT_WINDOW=900000
|
||||
RATE_LIMIT_MAX=100
|
||||
|
||||
# Monitoring Configuration
PROMETHEUS_PORT=9090
GRAFANA_PORT=3001
ALERT_MANAGER_PORT=9093
SLACK_WEBHOOK_URL=your_slack_webhook_here
DISCORD_WEBHOOK_URL=your_discord_webhook_here

# Backup Configuration
BACKUP_S3_BUCKET=aitbc-production-backups
BACKUP_S3_REGION=us-east-1
BACKUP_S3_ACCESS_KEY=your_s3_access_key_here
BACKUP_S3_SECRET_KEY=your_s3_secret_key_here

# Advanced Agent Features Configuration
CROSS_CHAIN_REPUTATION_CONTRACT=0x0000000000000000000000000000000000000000
AGENT_COMMUNICATION_CONTRACT=0x0000000000000000000000000000000000000000
AGENT_COLLABORATION_CONTRACT=0x0000000000000000000000000000000000000000
AGENT_LEARNING_CONTRACT=0x0000000000000000000000000000000000000000
AGENT_MARKETPLACE_V2_CONTRACT=0x0000000000000000000000000000000000000000
REPUTATION_NFT_CONTRACT=0x0000000000000000000000000000000000000000

# Service Configuration
CROSS_CHAIN_REPUTATION_PORT=8011
AGENT_COMMUNICATION_PORT=8012
AGENT_COLLABORATION_PORT=8013
AGENT_LEARNING_PORT=8014
AGENT_AUTONOMY_PORT=8015
MARKETPLACE_V2_PORT=8020

# Cross-Chain Configuration
SUPPORTED_CHAINS=ethereum,polygon,arbitrum,optimism,bsc,avalanche,fantom
CHAIN_RPC_ENDPOINTS=https://mainnet.infura.io/v3/your_project_id,https://polygon-mainnet.infura.io/v3/your_project_id,https://arbitrum-mainnet.infura.io/v3/your_project_id,https://optimism-mainnet.infura.io/v3/your_project_id,https://bsc-dataseed.infura.io/v3/your_project_id,https://avalanche-mainnet.infura.io/v3/your_project_id,https://fantom-mainnet.infura.io/v3/your_project_id

# Advanced Learning Configuration
MAX_MODEL_SIZE=104857600
MAX_TRAINING_TIME=3600
DEFAULT_LEARNING_RATE=0.001
CONVERGENCE_THRESHOLD=0.001
EARLY_STOPPING_PATIENCE=10

# Agent Communication Configuration
MIN_REPUTATION_SCORE=1000
BASE_MESSAGE_PRICE=0.001
MAX_MESSAGE_SIZE=100000
MESSAGE_TIMEOUT=86400
CHANNEL_TIMEOUT=2592000
ENCRYPTION_ENABLED=true

# Security Configuration
ENABLE_RATE_LIMITING=true
ENABLE_WAF=true
ENABLE_INTRUSION_DETECTION=true
ENABLE_SECURITY_MONITORING=true
LOG_LEVEL=info

# Performance Configuration
ENABLE_CACHING=true
CACHE_TTL=3600
MAX_CONCURRENT_REQUESTS=1000
REQUEST_TIMEOUT=30000

# Logging Configuration
LOG_LEVEL=info
LOG_FORMAT=json
LOG_FILE=/var/log/aitbc/advanced-features.log
LOG_MAX_SIZE=100MB
LOG_MAX_FILES=10

# Health Check Configuration
HEALTH_CHECK_INTERVAL=30
HEALTH_CHECK_TIMEOUT=10
HEALTH_CHECK_RETRIES=3
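A monitor applying `HEALTH_CHECK_INTERVAL` and `HEALTH_CHECK_RETRIES` might loop like the sketch below. The probe and sleep are injectable so the retry logic can be tested deterministically; none of these names come from the repo:

```python
import time
from typing import Callable

def check_with_retries(probe: Callable[[], bool],
                       retries: int = 3,        # HEALTH_CHECK_RETRIES
                       interval: float = 30.0,  # HEALTH_CHECK_INTERVAL (seconds)
                       sleep: Callable[[float], None] = time.sleep) -> bool:
    """Return True as soon as one probe succeeds; give up after `retries` attempts."""
    for attempt in range(retries):
        if probe():
            return True
        if attempt < retries - 1:
            sleep(interval)  # wait before the next attempt
    return False
```

The probe itself would typically issue the `/health` request with `HEALTH_CHECK_TIMEOUT` as its socket timeout.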
# Feature Flags
ENABLE_CROSS_CHAIN_REPUTATION=true
ENABLE_AGENT_COMMUNICATION=true
ENABLE_AGENT_COLLABORATION=true
ENABLE_ADVANCED_LEARNING=true
ENABLE_AGENT_AUTONOMY=true
ENABLE_MARKETPLACE_V2=true

# Development/Debug Configuration
DEBUG=false
VERBOSE=false
ENABLE_PROFILING=false
ENABLE_METRICS=true

# External Services
NOTIFICATION_SERVICE_URL=https://api.aitbc.dev/notifications
ANALYTICS_SERVICE_URL=https://api.aitbc.dev/analytics
MONITORING_SERVICE_URL=https://monitoring.aitbc.dev

# SSL/TLS Configuration
SSL_CERT_PATH=/etc/ssl/certs/aitbc.crt
SSL_KEY_PATH=/etc/ssl/private/aitbc.key
SSL_CA_PATH=/etc/ssl/certs/ca.crt

# Load Balancer Configuration
LOAD_BALANCER_URL=https://loadbalancer.aitbc.dev
LOAD_BALANCER_HEALTH_CHECK=/health
LOAD_BALANCER_STICKY_SESSIONS=true

# Content Delivery Network
CDN_URL=https://cdn.aitbc.dev
CDN_CACHE_TTL=3600

# Email Configuration
SMTP_HOST=smtp.gmail.com
SMTP_PORT=587
SMTP_USER=your_email@gmail.com
SMTP_PASSWORD=your_email_password
SMTP_FROM=noreply@aitbc.dev

# Analytics Configuration
GOOGLE_ANALYTICS_ID=GA-XXXXXXXXX
MIXPANEL_TOKEN=your_mixpanel_token_here
SEGMENT_WRITE_KEY=your_segment_write_key_here

# Error Tracking
SENTRY_DSN=your_sentry_dsn_here
ROLLBAR_ACCESS_TOKEN=your_rollbar_token_here

# API Configuration
API_VERSION=v1
API_PREFIX=/api/v1/advanced
API_DOCS_URL=https://docs.aitbc.dev/advanced-features
# Rate Limiting Configuration
RATE_LIMIT_REQUESTS_PER_MINUTE=1000
RATE_LIMIT_REQUESTS_PER_HOUR=50000
RATE_LIMIT_REQUESTS_PER_DAY=1000000
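A sliding-window limiter is one way to enforce the per-minute tier above (the per-hour and per-day tiers would stack additional windows the same way). A minimal sketch, not repo code:

```python
from collections import deque

class SlidingWindowLimiter:
    """Allow at most `limit` events per `window` seconds."""
    def __init__(self, limit: int, window: float):
        self.limit = limit        # e.g. RATE_LIMIT_REQUESTS_PER_MINUTE
        self.window = window      # window length in seconds
        self.events: deque = deque()  # timestamps of accepted events

    def allow(self, now: float) -> bool:
        # Drop timestamps that have aged out of the window.
        while self.events and now - self.events[0] >= self.window:
            self.events.popleft()
        if len(self.events) < self.limit:
            self.events.append(now)
            return True
        return False
```

A production deployment would usually keep these counters in Redis so all instances share one view of the limit.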
# Cache Configuration
REDIS_CACHE_TTL=3600
MEMORY_CACHE_SIZE=1000
CACHE_HIT_RATIO_TARGET=0.8
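The in-memory side of the cache settings above can be sketched as a tiny TTL cache with a size cap (illustrative only; the eviction policy and injectable clock are assumptions made for testability):

```python
import time

class TTLCache:
    """Bounded cache where entries expire after `ttl` seconds."""
    def __init__(self, ttl: float = 3600, max_size: int = 1000, clock=time.monotonic):
        self.ttl, self.max_size, self.clock = ttl, max_size, clock
        self._store: dict = {}  # key -> (expires_at, value)

    def set(self, key, value):
        # When full, evict the entry that expires soonest (one simple policy).
        if key not in self._store and len(self._store) >= self.max_size:
            oldest = min(self._store, key=lambda k: self._store[k][0])
            del self._store[oldest]
        self._store[key] = (self.clock() + self.ttl, value)

    def get(self, key, default=None):
        entry = self._store.get(key)
        if entry is None or entry[0] <= self.clock():
            self._store.pop(key, None)  # lazily drop expired entries
            return default
        return entry[1]
```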
# Database Connection Pool
DB_POOL_MIN=5
DB_POOL_MAX=20
DB_POOL_ACQUIRE_TIMEOUT=30000
DB_POOL_IDLE_TIMEOUT=300000

# Session Configuration
SESSION_SECRET=your_session_secret_here
SESSION_TIMEOUT=3600
SESSION_COOKIE_SECURE=true
SESSION_COOKIE_HTTPONLY=true

# File Upload Configuration
UPLOAD_MAX_SIZE=10485760
UPLOAD_ALLOWED_TYPES=jpg,jpeg,png,gif,pdf,txt,csv
UPLOAD_PATH=/var/uploads/aitbc

# WebSocket Configuration
WEBSOCKET_PORT=8080
WEBSOCKET_PATH=/ws
WEBSOCKET_HEARTBEAT_INTERVAL=30

# Background Jobs
JOBS_ENABLED=true
JOBS_CONCURRENCY=10
JOBS_TIMEOUT=300

# External Integrations
IPFS_GATEWAY_URL=https://ipfs.io
FILECOIN_API_KEY=your_filecoin_api_key_here
PINATA_API_KEY=your_pinata_api_key_here

# Blockchain Configuration
BLOCKCHAIN_PROVIDER=infura
BLOCKCHAIN_NETWORK=mainnet
BLOCKCHAIN_CONFIRMATIONS=12
BLOCKCHAIN_TIMEOUT=300000

# Smart Contract Configuration
CONTRACT_DEPLOYER=your_deployer_address
CONTRACT_VERIFIER=your_verifier_address
CONTRACT_GAS_BUFFER=1.1

# Testing Configuration
TEST_MODE=false
TEST_NETWORK=localhost
TEST_MNEMONIC=test test test test test test test test test test test test

# Migration Configuration
MIGRATIONS_PATH=./migrations
MIGRATIONS_AUTO_RUN=false

# Maintenance Mode
MAINTENANCE_MODE=false
MAINTENANCE_MESSAGE="AITBC Advanced Agent Features is under maintenance"

# Feature Flags for Experimental Features
EXPERIMENTAL_FEATURES=false
BETA_FEATURES=true
ALPHA_FEATURES=false

# Compliance Configuration
GDPR_COMPLIANT=true
CCPA_COMPLIANT=true
DATA_RETENTION_DAYS=365

# Audit Configuration
AUDIT_LOGGING=true
AUDIT_RETENTION_DAYS=2555
AUDIT_EXPORT_FORMAT=json

# Performance Monitoring
APM_ENABLED=true
APM_SERVICE_NAME=aitbc-advanced-features
APM_ENVIRONMENT=production

# Security Headers
SECURITY_HEADERS_ENABLED=true
CSP_ENABLED=true
HSTS_ENABLED=true
X_FRAME_OPTIONS=DENY

# API Authentication
API_KEY_REQUIRED=false
API_KEY_HEADER=X-API-Key
API_KEY_HEADER_VALUE=your_api_key_here

# Webhook Configuration
WEBHOOK_SECRET=your_webhook_secret_here
WEBHOOK_TIMEOUT=10000
WEBHOOK_RETRY_ATTEMPTS=3

# Notification Configuration
NOTIFICATION_ENABLED=true
NOTIFICATION_CHANNELS=email,slack,discord
NOTIFICATION_LEVELS=info,warning,error,critical

# Backup Configuration
BACKUP_ENABLED=true
BACKUP_SCHEDULE=daily
BACKUP_RETENTION_DAYS=30
BACKUP_ENCRYPTION=true

# Disaster Recovery
DISASTER_RECOVERY_ENABLED=true
DISASTER_RECOVERY_RTO=3600
DISASTER_RECOVERY_RPO=3600

# Scaling Configuration
AUTO_SCALING_ENABLED=true
MIN_INSTANCES=2
MAX_INSTANCES=10
SCALE_UP_THRESHOLD=70
SCALE_DOWN_THRESHOLD=30

# Health Check Endpoints
HEALTH_CHECK_ENDPOINTS=/health,/ready,/metrics,/version
HEALTH_CHECK_DEPENDENCIES=database,redis,blockchain

# Metrics Configuration
METRICS_ENABLED=true
METRICS_PORT=9090
METRICS_PATH=/metrics

# Tracing Configuration
TRACING_ENABLED=true
TRACING_SAMPLE_RATE=0.1
TRACING_EXPORTER=jaeger

# Documentation Configuration
DOCS_ENABLED=true
DOCS_URL=https://docs.aitbc.dev/advanced-features
DOCS_VERSION=latest

# Support Configuration
SUPPORT_EMAIL=support@aitbc.dev
SUPPORT_PHONE=+1-555-123-4567
SUPPORT_HOURS=24/7

# Legal Configuration
PRIVACY_POLICY_URL=https://aitbc.dev/privacy
TERMS_OF_SERVICE_URL=https://aitbc.dev/terms
COOKIE_POLICY_URL=https://aitbc.dev/cookies
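Since this file is deprecated in favour of `/etc/aitbc/.env`, its main remaining value is as a key inventory. A minimal sketch of a pre-deploy check that parses `KEY=value` lines and flags unfilled placeholders (the `REQUIRED` list is an illustrative subset, not from the repo; a real deployment would likely use python-dotenv):

```python
REQUIRED = ["NETWORK", "CHAIN_ID", "DATABASE_URL", "JWT_SECRET"]  # illustrative

def parse_env(text: str) -> dict:
    """Parse simple KEY=value lines, ignoring blanks and # comments."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip().strip('"')
    return env

def missing_keys(env: dict) -> list:
    """Keys that are absent or still carry a your_... placeholder."""
    return [k for k in REQUIRED if not env.get(k) or env[k].startswith("your_")]
```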
@@ -1 +1 @@
-5d21312e467c438bbfcd035f2c65ba815ee326bf
+9153d888e5ca0923a494b5c849cffd15125abc46
@@ -6,7 +6,7 @@ edge_node_config:

 services:
   - name: "marketplace-api"
-    port: 8000
+    port: 8002
     health_check: "/health/live"
     enabled: true
   - name: "cache-layer"
@@ -6,7 +6,7 @@ edge_node_config:

 services:
   - name: "marketplace-api"
-    port: 8000
+    port: 8002
     enabled: true
     health_check: "/health/live"
@@ -23,7 +23,7 @@ rpc:
   bind_host: 0.0.0.0
   bind_port: 8080
   cors_origins:
-    - http://localhost:8009
+    - http://localhost:8015
    - http://localhost:8000
  rate_limit: 1000 # requests per minute
 ```
@@ -44,12 +44,12 @@ This document provides comprehensive technical documentation for aitbc enhanced

 **🔧 Systemd Services Updated:**
 ```bash
-/etc/systemd/system/aitbc-multimodal-gpu.service # Port 8010
+/etc/systemd/system/aitbc-gpu.service # Port 8010
 /etc/systemd/system/aitbc-multimodal.service # Port 8011
 /etc/systemd/system/aitbc-modality-optimization.service # Port 8012
-/etc/systemd/system/aitbc-adaptive-learning.service # Port 8013
-/etc/systemd/system/aitbc-marketplace-enhanced.service # Port 8014
-/etc/systemd/system/aitbc-openclaw-enhanced.service # Port 8015
+/etc/systemd/system/aitbc-learning.service # Port 8013
+/etc/systemd/system/aitbc-marketplace.service # Port 8014
+/etc/systemd/system/aitbc-openclaw.service # Port 8015
 /etc/systemd/system/aitbc-web-ui.service # Port 8016
 ```
@@ -62,7 +62,7 @@ This document provides comprehensive technical documentation for aitbc enhanced
 curl -s http://localhost:8010/health ✅ {"status":"ok","service":"gpu-multimodal","port":8010}
 curl -s http://localhost:8011/health ✅ {"status":"ok","service":"gpu-multimodal","port":8011}
 curl -s http://localhost:8012/health ✅ {"status":"ok","service":"modality-optimization","port":8012}
-curl -s http://localhost:8013/health ✅ {"status":"ok","service":"adaptive-learning","port":8013}
+curl -s http://localhost:8013/health ✅ {"status":"ok","service":"learning","port":8013}
 curl -s http://localhost:8016/health ✅ {"status":"ok","service":"web-ui","port":8016}
 ```
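The checks above eyeball the JSON by hand; a small validator (a sketch, not repo code — the expected fields are taken from the responses shown) could gate the same checks in a script:

```python
import json

def is_healthy(payload: str, expected_service: str, expected_port: int) -> bool:
    """Validate a /health response body like the ones shown above."""
    try:
        data = json.loads(payload)
    except ValueError:
        return False  # not JSON at all
    return (data.get("status") == "ok"
            and data.get("service") == expected_service
            and data.get("port") == expected_port)
```

Piping each `curl -s` body through this function turns the manual spot-checks into pass/fail results.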
@@ -156,7 +156,7 @@ sudo netstat -tlnp | grep -E ":(8010|8011|8012|8013|8014|8015|8016)"
 ```json
 {
   "status": "ok",
-  "service": "adaptive-learning",
+  "service": "learning",
   "port": 8013,
   "learning_active": true,
   "learning_mode": "online",
@@ -119,7 +119,7 @@ cd /opt/aitbc/apps/coordinator-api

 # Check individual service logs
 ./manage_services.sh logs aitbc-multimodal
-./manage_services.sh logs aitbc-gpu-multimodal
+./manage_services.sh logs aitbc-gpu
 ```

## 📊 Service Details
@@ -197,7 +197,7 @@ curl -X POST http://localhost:8011/process \
 ./manage_services.sh status

 # View service logs
-./manage_services.sh logs aitbc-gpu-multimodal
+./manage_services.sh logs aitbc-gpu

 # Enable auto-start
 ./manage_services.sh enable
@@ -234,10 +234,10 @@ df -h
 systemctl status aitbc-multimodal.service

 # Audit service logs
-sudo journalctl -u aitbc-multimodal.service --since "1 hour ago"
+sudo journalctl -u aitbc-gpu.service --since "1 hour ago"

 # Monitor resource usage
-systemctl status aitbc-gpu-multimodal.service --no-pager
+systemctl status aitbc-gpu.service --no-pager
 ```

## 🐛 Troubleshooting
@@ -283,10 +283,10 @@ sudo fuser -k 8010/tcp
 free -h

 # Monitor service memory
-systemctl status aitbc-adaptive-learning.service --no-pager
+systemctl status aitbc-learning.service --no-pager

 # Adjust memory limits
-systemctl edit aitbc-adaptive-learning.service
+systemctl edit aitbc-learning.service
 ```

### Performance Optimization
@@ -64,7 +64,7 @@ Last updated: 2026-03-25
 | `systemd/aitbc-coordinator-api.service` | ✅ Active | Standardized coordinator API |
 | `systemd/aitbc-wallet.service` | ✅ Active | Fixed and standardized (Mar 2026) |
 | `systemd/aitbc-loadbalancer-geo.service` | ✅ Active | Fixed and standardized (Mar 2026) |
-| `systemd/aitbc-marketplace-enhanced.service` | ✅ Active | Fixed and standardized (Mar 2026) |
+| `systemd/aitbc-marketplace.service` | ✅ Active | Renamed from enhanced (Mar 2026) |

### Website (`website/`)
@@ -348,7 +348,7 @@ ssh aitbc1-cascade # Direct SSH to aitbc1 container (incus)
 | GPU Multimodal | 8011 | python | 3.13.5 | /api/gpu-multimodal/* | ✅ (CPU-only) |
 | Modality Optimization | 8012 | python | 3.13.5 | /api/optimization/* | ✅ |
 | Adaptive Learning | 8013 | python | 3.13.5 | /api/learning/* | ✅ |
-| Marketplace Enhanced | 8014 | python | 3.13.5 | /api/marketplace-enhanced/* | ✅ |
+| Marketplace | 8014 | python | 3.13.5 | /api/marketplace/* | ✅ |
 | OpenClaw Enhanced | 8015 | python | 3.13.5 | /api/openclaw/* | ✅ |
 | Web UI | 8016 | python | 3.13.5 | /app/ | ✅ |
 | Geographic Load Balancer | 8017 | python | 3.13.5 | /api/loadbalancer/* | ✅ |
@@ -262,8 +262,8 @@ systemctl enable aitbc-multimodal-gpu.service
 systemctl enable aitbc-multimodal.service
 systemctl enable aitbc-modality-optimization.service
-systemctl enable aitbc-adaptive-learning.service
-systemctl enable aitbc-marketplace-enhanced.service
-systemctl enable aitbc-openclaw-enhanced.service
+systemctl enable aitbc-marketplace.service
+systemctl enable aitbc-openclaw.service
 systemctl enable aitbc-loadbalancer-geo.service
 ```
@@ -1,4 +1,4 @@
-# AITBC Enhanced Services (8010-8016) Implementation Complete - March 4, 2026
+# AITBC Enhanced Services (8000-8023) Implementation Complete - March 30, 2026

## 🎯 Implementation Summary
@@ -9,34 +9,41 @@

### **✅ Enhanced Services Implemented:**

**🚀 Port 8010: Multimodal GPU Service**
**🚀 Port 8007: Web UI Service**
- **Status**: ✅ Running and responding
- **Purpose**: GPU-accelerated multimodal processing
- **Purpose**: Web interface for enhanced services
- **Endpoint**: `http://localhost:8007/`
- **Features**: HTML interface, service status dashboard

**🚀 Port 8010: GPU Service**
- **Status**: ✅ Running and responding
- **Purpose**: GPU-accelerated processing
- **Endpoint**: `http://localhost:8010/health`
- **Features**: GPU status monitoring, multimodal processing capabilities
- **Features**: GPU status monitoring, processing capabilities

**🚀 Port 8011: GPU Multimodal Service**
- **Status**: ✅ Running and responding
- **Purpose**: Advanced GPU multimodal capabilities
- **Endpoint**: `http://localhost:8011/health`
- **Features**: Text, image, and audio processing

**🚀 Port 8012: Modality Optimization Service**
- **Status**: ✅ Running and responding
- **Purpose**: Optimization of different modalities
- **Endpoint**: `http://localhost:8012/health`
- **Features**: Modality optimization, high-performance processing

**🚀 Port 8013: Adaptive Learning Service**
**🚀 Port 8011: Learning Service**
- **Status**: ✅ Running and responding
- **Purpose**: Machine learning and adaptation
- **Endpoint**: `http://localhost:8013/health`
- **Endpoint**: `http://localhost:8011/health`
- **Features**: Online learning, model training, performance metrics

**🚀 Port 8014: Marketplace Enhanced Service**
- **Status**: ✅ Updated (existing service)
- **Purpose**: Enhanced marketplace functionality
**🚀 Port 8012: Agent Coordinator**
- **Status**: ✅ Running and responding
- **Purpose**: Agent orchestration and coordination
- **Endpoint**: `http://localhost:8012/health`
- **Features**: Agent management, task assignment

**🚀 Port 8013: Agent Registry**
- **Status**: ✅ Running and responding
- **Purpose**: Agent registration and discovery
- **Endpoint**: `http://localhost:8013/health`
- **Features**: Agent registration, service discovery

**🚀 Port 8014: OpenClaw Service**
- **Status**: ✅ Running and responding
- **Purpose**: Edge computing and agent orchestration
- **Endpoint**: `http://localhost:8014/health`
- **Features**: Edge computing, agent management
- **Features**: Advanced marketplace features, royalty management

**🚀 Port 8015: OpenClaw Enhanced Service**
@@ -77,7 +84,7 @@
 /etc/systemd/system/aitbc-modality-optimization.service # Port 8012
 /etc/systemd/system/aitbc-adaptive-learning.service # Port 8013
 /etc/systemd/system/aitbc-marketplace-enhanced.service # Port 8014
-/etc/systemd/system/aitbc-openclaw-enhanced.service # Port 8015
+/etc/systemd/system/aitbc-openclaw.service # Port 8015
 /etc/systemd/system/aitbc-web-ui.service # Port 8016
 ```
Some files were not shown because too many files have changed in this diff.