Merge aitbc1/blockchain-production into main: Enhanced chain sync, PoA transaction processing, and production genesis tooling
Some checks failed
AITBC CI/CD Pipeline / lint-and-test (3.11) (pull_request) Has been cancelled
AITBC CI/CD Pipeline / lint-and-test (3.12) (pull_request) Has been cancelled
AITBC CI/CD Pipeline / lint-and-test (3.13) (pull_request) Has been cancelled
AITBC CLI Level 1 Commands Test / test-cli-level1 (3.11) (pull_request) Has been cancelled
AITBC CLI Level 1 Commands Test / test-cli-level1 (3.12) (pull_request) Has been cancelled
AITBC CLI Level 1 Commands Test / test-cli-level1 (3.13) (pull_request) Has been cancelled
Security Scanning / Bandit Security Scan (apps/coordinator-api/src) (pull_request) Has been cancelled
Security Scanning / Bandit Security Scan (cli/aitbc_cli) (pull_request) Has been cancelled
Security Scanning / Bandit Security Scan (packages/py/aitbc-core/src) (pull_request) Has been cancelled
Security Scanning / Bandit Security Scan (packages/py/aitbc-crypto/src) (pull_request) Has been cancelled
Security Scanning / Bandit Security Scan (packages/py/aitbc-sdk/src) (pull_request) Has been cancelled
Security Scanning / Bandit Security Scan (tests) (pull_request) Has been cancelled
Security Scanning / CodeQL Security Analysis (javascript) (pull_request) Has been cancelled
Security Scanning / CodeQL Security Analysis (python) (pull_request) Has been cancelled
Security Scanning / Dependency Security Scan (pull_request) Has been cancelled
Security Scanning / Container Security Scan (pull_request) Has been cancelled
Security Scanning / OSSF Scorecard (pull_request) Has been cancelled
AITBC CI/CD Pipeline / test-cli (pull_request) Has been cancelled
AITBC CI/CD Pipeline / test-services (pull_request) Has been cancelled
AITBC CI/CD Pipeline / test-production-services (pull_request) Has been cancelled
AITBC CI/CD Pipeline / security-scan (pull_request) Has been cancelled
AITBC CI/CD Pipeline / build (pull_request) Has been cancelled
AITBC CI/CD Pipeline / deploy-staging (pull_request) Has been cancelled
AITBC CI/CD Pipeline / deploy-production (pull_request) Has been cancelled
AITBC CI/CD Pipeline / performance-test (pull_request) Has been cancelled
AITBC CI/CD Pipeline / docs (pull_request) Has been cancelled
AITBC CI/CD Pipeline / release (pull_request) Has been cancelled
AITBC CI/CD Pipeline / notify (pull_request) Has been cancelled
AITBC CLI Level 1 Commands Test / test-summary (pull_request) Has been cancelled
Security Scanning / Security Summary Report (pull_request) Has been cancelled
This commit is contained in:

.env.example (113 lines changed)
@@ -1,63 +1,58 @@
-# AITBC Environment Configuration
-# SECURITY NOTICE: Use service-specific environment files
-#
-# For development, copy from:
-# config/environments/development/coordinator.env
-# config/environments/development/wallet-daemon.env
-#
-# For production, use AWS Secrets Manager and Kubernetes secrets
-# Templates available in config/environments/production/
+# AITBC Central Environment Example
+# SECURITY NOTICE: Use a secrets manager for production. Do not commit real secrets.
+# Run: python config/security/environment-audit.py --format text

-# =============================================================================
-# BASIC CONFIGURATION ONLY
-# =============================================================================
-# Application Environment
-APP_ENV=development
-DEBUG=false
-LOG_LEVEL=INFO
+# =========================
+# Blockchain core
+# =========================
+chain_id=ait-mainnet
+supported_chains=ait-mainnet
+rpc_bind_host=0.0.0.0
+rpc_bind_port=8006
+p2p_bind_host=0.0.0.0
+p2p_bind_port=8005
+proposer_id=aitbc1genesis
+proposer_key=changeme_hex_private_key
+keystore_path=/opt/aitbc/keystore
+keystore_password_file=/opt/aitbc/keystore/.password
+gossip_backend=broadcast
+gossip_broadcast_url=redis://127.0.0.1:6379
+db_path=/opt/aitbc/apps/blockchain-node/data/ait-mainnet/chain.db
+mint_per_unit=0
+coordinator_ratio=0.05
+block_time_seconds=60
+enable_block_production=true

-# =============================================================================
-# SECURITY REQUIREMENTS
-# =============================================================================
-# IMPORTANT: Do NOT store actual secrets in this file
-# Use AWS Secrets Manager for production
-# Generate secure keys with: openssl rand -hex 32
+# =========================
+# Coordinator API
+# =========================
+APP_ENV=production
+APP_HOST=127.0.0.1
+APP_PORT=8011
+DATABASE__URL=sqlite:///./data/coordinator.db
+BLOCKCHAIN_RPC_URL=http://127.0.0.1:8026
+ALLOW_ORIGINS=["http://localhost:8011","http://localhost:8000","http://8026"]
+JOB_TTL_SECONDS=900
+HEARTBEAT_INTERVAL_SECONDS=10
+HEARTBEAT_TIMEOUT_SECONDS=30
+RATE_LIMIT_REQUESTS=60
+RATE_LIMIT_WINDOW_SECONDS=60
+CLIENT_API_KEYS=["client_prod_key_use_real_value"]
+MINER_API_KEYS=["miner_prod_key_use_real_value"]
+ADMIN_API_KEYS=["admin_prod_key_use_real_value"]
+HMAC_SECRET=change_this_to_a_32_byte_random_secret
+JWT_SECRET=change_this_to_another_32_byte_random_secret

-# =============================================================================
-# SERVICE CONFIGURATION
-# =============================================================================
-# Choose your service configuration:
-# 1. Copy service-specific .env file from config/environments/
-# 2. Fill in actual values (NEVER commit secrets)
-# 3. Run: python config/security/environment-audit.py
+# =========================
+# Marketplace Web
+# =========================
+VITE_MARKETPLACE_DATA_MODE=live
+VITE_MARKETPLACE_API=/api
+VITE_MARKETPLACE_ENABLE_BIDS=true
+VITE_MARKETPLACE_REQUIRE_AUTH=false

-# =============================================================================
-# DEVELOPMENT QUICK START
-# =============================================================================
-# For quick development setup:
-# cp config/environments/development/coordinator.env .env
-# cp config/environments/development/wallet-daemon.env .env.wallet
-#
-# Then edit the copied files with your values

-# =============================================================================
-# PRODUCTION DEPLOYMENT
-# =============================================================================
-# For production deployment:
-# 1. Use AWS Secrets Manager for all sensitive values
-# 2. Reference secrets as: secretRef:secret-name:key
-# 3. Run security audit before deployment
-# 4. Use templates in config/environments/production/

-# =============================================================================
-# SECURITY VALIDATION
-# =============================================================================
-# Validate your configuration:
-# python config/security/environment-audit.py --format text

-# =============================================================================
-# FOR MORE INFORMATION
-# =============================================================================
-# See: config/security/secret-validation.yaml
-# See: config/security/environment-audit.py
-# See: config/environments/ directory
+# =========================
+# Notes
+# =========================
+# For production: move secrets to a secrets manager and reference via secretRef
+# Validate config: python config/security/environment-audit.py --format text
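The audit script referenced here is not shown in the diff; as a rough illustration of the kind of check it could perform, consider the sketch below. The `audit_env` helper, `REQUIRED_KEYS` list, and `PLACEHOLDER_MARKERS` are hypothetical names, not taken from the repository.

```python
# Illustrative sketch only; not the project's actual audit rules.
REQUIRED_KEYS = ["chain_id", "rpc_bind_port", "proposer_id", "db_path"]
PLACEHOLDER_MARKERS = ("changeme", "change_this", "use_real_value")

def audit_env(env: dict) -> list:
    """Return a list of problems found in an env mapping."""
    problems = []
    for key in REQUIRED_KEYS:
        if not env.get(key):
            problems.append(f"missing: {key}")
    for key, value in env.items():
        # Flag values that still contain placeholder secrets
        if any(marker in str(value) for marker in PLACEHOLDER_MARKERS):
            problems.append(f"placeholder secret: {key}")
    return problems

env = {
    "chain_id": "ait-mainnet",
    "rpc_bind_port": "8006",
    "proposer_id": "aitbc1genesis",
    "db_path": "/opt/aitbc/apps/blockchain-node/data/ait-mainnet/chain.db",
    "proposer_key": "changeme_hex_private_key",   # flagged as a placeholder
}
print(audit_env(env))  # ['placeholder secret: proposer_key']
```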
@@ -32,6 +32,9 @@ class RateLimitMiddleware(BaseHTTPMiddleware):

    async def dispatch(self, request: Request, call_next):
        client_ip = request.client.host if request.client else "unknown"
+        # Bypass rate limiting for localhost (sync/health internal traffic)
+        if client_ip in {"127.0.0.1", "::1"}:
+            return await call_next(request)
        now = time.time()
        # Clean old entries
        self._requests[client_ip] = [
@@ -109,7 +112,8 @@ def create_app() -> FastAPI:

    # Middleware (applied in reverse order)
    app.add_middleware(RequestLoggingMiddleware)
-    app.add_middleware(RateLimitMiddleware, max_requests=200, window_seconds=60)
+    # Allow higher RPC throughput (sync + node traffic)
+    app.add_middleware(RateLimitMiddleware, max_requests=5000, window_seconds=60)
    app.add_middleware(
        CORSMiddleware,
        allow_origins=[
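The middleware above keeps a per-client list of request timestamps and trims entries outside the window. A standalone sketch of that sliding-window scheme, including the new localhost bypass; the `SlidingWindowLimiter` class is illustrative and not the PR's code, which is only partially shown here.

```python
import time
from collections import defaultdict

# Standalone sketch of the sliding-window limiter implied by the middleware.
class SlidingWindowLimiter:
    def __init__(self, max_requests: int, window_seconds: float):
        self.max_requests = max_requests
        self.window_seconds = window_seconds
        self._requests = defaultdict(list)  # client_ip -> request timestamps

    def allow(self, client_ip: str, now: float = None) -> bool:
        # Localhost bypass, as added in the hunk above
        if client_ip in {"127.0.0.1", "::1"}:
            return True
        now = time.time() if now is None else now
        # Clean old entries outside the window
        self._requests[client_ip] = [
            t for t in self._requests[client_ip] if now - t < self.window_seconds
        ]
        if len(self._requests[client_ip]) >= self.max_requests:
            return False
        self._requests[client_ip].append(now)
        return True

limiter = SlidingWindowLimiter(max_requests=2, window_seconds=60)
print([limiter.allow("10.0.0.1", now=t) for t in (0.0, 1.0, 2.0)])  # [True, True, False]
```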
@@ -13,12 +13,20 @@ from typing import Dict, Any, Optional, List
logger = logging.getLogger(__name__)


class ChainSyncService:
-    def __init__(self, redis_url: str, node_id: str, rpc_port: int = 8006):
+    def __init__(self, redis_url: str, node_id: str, rpc_port: int = 8006, leader_host: str = None,
+                 source_host: str = "127.0.0.1", source_port: int = None,
+                 import_host: str = "127.0.0.1", import_port: int = None):
        self.redis_url = redis_url
        self.node_id = node_id
-        self.rpc_port = rpc_port
+        self.rpc_port = rpc_port  # kept for backward compat (local poll if source_port None)
+        self.leader_host = leader_host  # Host of the leader node (legacy)
+        self.source_host = source_host
+        self.source_port = source_port or rpc_port
+        self.import_host = import_host
+        self.import_port = import_port or rpc_port
        self._stop_event = asyncio.Event()
        self._redis = None
+        self._receiver_ready = asyncio.Event()

    async def start(self):
        """Start chain synchronization service"""
@@ -34,10 +42,11 @@ class ChainSyncService:
            return

-        # Start block broadcasting task
-        broadcast_task = asyncio.create_task(self._broadcast_blocks())
-
        # Start block receiving task
        receive_task = asyncio.create_task(self._receive_blocks())
+        # Wait until receiver subscribed so we don't drop the initial burst
+        await self._receiver_ready.wait()
+        broadcast_task = asyncio.create_task(self._broadcast_blocks())

        try:
            await self._stop_event.wait()
@@ -59,16 +68,22 @@ class ChainSyncService:
        import aiohttp

        last_broadcast_height = 0
+        retry_count = 0
+        max_retries = 5
+        base_delay = 2

        while not self._stop_event.is_set():
            try:
                # Get current head from local RPC
                async with aiohttp.ClientSession() as session:
-                    async with session.get(f"http://127.0.0.1:{self.rpc_port}/rpc/head") as resp:
+                    async with session.get(f"http://{self.source_host}:{self.source_port}/rpc/head") as resp:
                        if resp.status == 200:
                            head_data = await resp.json()
                            current_height = head_data.get('height', 0)

+                            # Reset retry count on successful connection
+                            retry_count = 0

                            # Broadcast new blocks
                            if current_height > last_broadcast_height:
                                for height in range(last_broadcast_height + 1, current_height + 1):
@@ -78,11 +93,30 @@ class ChainSyncService:

                                last_broadcast_height = current_height
                                logger.info(f"Broadcasted blocks up to height {current_height}")
+                        elif resp.status == 429:
+                            raise Exception("rate_limit")
                        else:
+                            raise Exception(f"RPC returned status {resp.status}")

            except Exception as e:
                logger.error(f"Error in block broadcast: {e}")
+                retry_count += 1
+                # If rate-limited, wait longer before retrying
+                if str(e) == "rate_limit":
+                    delay = base_delay * 30
+                    logger.warning(f"RPC rate limited, retrying in {delay}s")
+                    await asyncio.sleep(delay)
+                    continue
+                if retry_count <= max_retries:
+                    delay = base_delay * (2 ** (retry_count - 1))  # Exponential backoff
+                    logger.warning(f"RPC connection failed (attempt {retry_count}/{max_retries}), retrying in {delay}s: {e}")
+                    await asyncio.sleep(delay)
+                    continue
+                else:
+                    logger.error(f"RPC connection failed after {max_retries} attempts, waiting {base_delay * 10}s: {e}")
+                    await asyncio.sleep(base_delay * 10)
+                    retry_count = 0  # Reset retry count after long wait

-            await asyncio.sleep(2)  # Check every 2 seconds
+            await asyncio.sleep(base_delay)  # Check every 2 seconds when connected
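The retry schedule in this hunk reduces to a small pure function. The `retry_delay` helper below is a hypothetical sketch that mirrors the diff's arithmetic (2 s base, doubling per attempt up to five attempts, a 60 s pause on HTTP 429, and a 20 s long wait that resets the counter); it is not code from the PR.

```python
# Sketch of the poll-retry schedule, as a pure function.
def retry_delay(retry_count: int, rate_limited: bool = False,
                base_delay: int = 2, max_retries: int = 5):
    """Return (delay_seconds, reset_counter) for a failed poll attempt."""
    if rate_limited:
        return base_delay * 30, False                        # long pause on HTTP 429
    if retry_count <= max_retries:
        return base_delay * (2 ** (retry_count - 1)), False  # exponential backoff
    return base_delay * 10, True                             # long wait, then reset

print([retry_delay(n)[0] for n in range(1, 7)])  # [2, 4, 8, 16, 32, 20]
```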

    async def _receive_blocks(self):
        """Receive blocks from other nodes via Redis"""
@@ -91,6 +125,7 @@ class ChainSyncService:

        pubsub = self._redis.pubsub()
        await pubsub.subscribe("blocks")
+        self._receiver_ready.set()

        logger.info("Subscribed to block broadcasts")
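The subscribe-before-broadcast ordering introduced above hinges on an `asyncio.Event` handshake: the broadcaster must not publish until the receiver has subscribed. A toy reproduction of the pattern (`handshake_demo` is an illustrative stand-in, not project code):

```python
import asyncio

async def handshake_demo():
    ready = asyncio.Event()
    log = []

    async def receiver():
        log.append("subscribed")   # stands in for pubsub.subscribe("blocks")
        ready.set()                # tell the broadcaster it is safe to start

    async def broadcaster():
        await ready.wait()         # never publish into an empty channel
        log.append("broadcasting")

    # broadcaster is scheduled first but parks on the event until receiver runs
    await asyncio.gather(broadcaster(), receiver())
    return log

print(asyncio.run(handshake_demo()))  # ['subscribed', 'broadcasting']
```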
@@ -108,11 +143,12 @@ class ChainSyncService:
    async def _get_block_by_height(self, height: int, session) -> Optional[Dict[str, Any]]:
        """Get block data by height from local RPC"""
        try:
-            async with session.get(f"http://127.0.0.1:{self.rpc_port}/rpc/blocks?start={height}&end={height}") as resp:
+            async with session.get(f"http://{self.source_host}:{self.source_port}/rpc/blocks-range?start={height}&end={height}") as resp:
                if resp.status == 200:
                    blocks_data = await resp.json()
                    blocks = blocks_data.get('blocks', [])
-                    return blocks[0] if blocks else None
+                    block = blocks[0] if blocks else None
+                    return block
        except Exception as e:
            logger.error(f"Error getting block {height}: {e}")
            return None
@@ -137,26 +173,68 @@ class ChainSyncService:
        if block_data.get('proposer') == self.node_id:
            return

-        async with aiohttp.ClientSession() as session:
-            async with session.post(
-                f"http://127.0.0.1:{self.rpc_port}/rpc/importBlock",
-                json=block_data
-            ) as resp:
-                if resp.status == 200:
-                    result = await resp.json()
-                    if result.get('accepted'):
-                        logger.info(f"Imported block {block_data.get('height')} from {block_data.get('proposer')}")
-                    else:
-                        logger.debug(f"Rejected block {block_data.get('height')}: {result.get('reason')}")
-                else:
-                    logger.warning(f"Failed to import block: {resp.status}")
+        # Determine target host - if we're a follower, import to leader, else import locally
+        target_host = self.import_host
+        target_port = self.import_port
+
+        # Retry logic for import
+        max_retries = 3
+        base_delay = 1
+
+        for attempt in range(max_retries):
+            try:
+                async with aiohttp.ClientSession() as session:
+                    async with session.post(
+                        f"http://{target_host}:{target_port}/rpc/importBlock",
+                        json=block_data
+                    ) as resp:
+                        if resp.status == 200:
+                            result = await resp.json()
+                            if result.get('accepted'):
+                                logger.info(f"Imported block {block_data.get('height')} from {block_data.get('proposer')}")
+                            else:
+                                logger.debug(f"Rejected block {block_data.get('height')}: {result.get('reason')}")
+                            return
+                        else:
+                            try:
+                                body = await resp.text()
+                            except Exception:
+                                body = "<no body>"
+                            raise Exception(f"HTTP {resp.status}: {body}")
+
+            except Exception as e:
+                if attempt < max_retries - 1:
+                    delay = base_delay * (2 ** attempt)
+                    logger.warning(f"Import failed (attempt {attempt + 1}/{max_retries}), retrying in {delay}s: {e}")
+                    await asyncio.sleep(delay)
+                else:
+                    logger.error(f"Failed to import block {block_data.get('height')} after {max_retries} attempts: {e}")
+                    return

        except Exception as e:
            logger.error(f"Error importing block: {e}")
-async def run_chain_sync(redis_url: str, node_id: str, rpc_port: int = 8006):
+async def run_chain_sync(
+    redis_url: str,
+    node_id: str,
+    rpc_port: int = 8006,
+    leader_host: str = None,
+    source_host: str = "127.0.0.1",
+    source_port: int = None,
+    import_host: str = "127.0.0.1",
+    import_port: int = None,
+):
    """Run chain synchronization service"""
-    service = ChainSyncService(redis_url, node_id, rpc_port)
+    service = ChainSyncService(
+        redis_url=redis_url,
+        node_id=node_id,
+        rpc_port=rpc_port,
+        leader_host=leader_host,
+        source_host=source_host,
+        source_port=source_port,
+        import_host=import_host,
+        import_port=import_port,
+    )
    await service.start()

def main():
@@ -166,13 +244,27 @@ def main():
    parser.add_argument("--redis", default="redis://localhost:6379", help="Redis URL")
    parser.add_argument("--node-id", required=True, help="Node identifier")
    parser.add_argument("--rpc-port", type=int, default=8006, help="RPC port")
+    parser.add_argument("--leader-host", help="Leader node host (for followers)")
+    parser.add_argument("--source-host", default="127.0.0.1", help="Host to poll for head/blocks")
+    parser.add_argument("--source-port", type=int, help="Port to poll for head/blocks")
+    parser.add_argument("--import-host", default="127.0.0.1", help="Host to import blocks into")
+    parser.add_argument("--import-port", type=int, help="Port to import blocks into")

    args = parser.parse_args()

    logging.basicConfig(level=logging.INFO)

    try:
-        asyncio.run(run_chain_sync(args.redis, args.node_id, args.rpc_port))
+        asyncio.run(run_chain_sync(
+            args.redis,
+            args.node_id,
+            args.rpc_port,
+            args.leader_host,
+            args.source_host,
+            args.source_port,
+            args.import_host,
+            args.import_port,
+        ))
    except KeyboardInterrupt:
        logger.info("Chain sync service stopped by user")
apps/blockchain-node/src/aitbc_chain/combined_main.py (new file, 94 lines)
@@ -0,0 +1,94 @@
#!/usr/bin/env python3
"""
Combined blockchain node and P2P service launcher
Runs both the main blockchain node and P2P placeholder service
"""

import asyncio
import logging
import os
import signal
import sys
from pathlib import Path

# Add src to path
sys.path.insert(0, str(Path(__file__).parent.parent))

from aitbc_chain.main import BlockchainNode, _run as run_node
from aitbc_chain.config import settings

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

class CombinedService:
    def __init__(self):
        self._stop_event = asyncio.Event()
        self._tasks = []
        self._loop = None

    def set_stop_event(self):
        """Set the stop event to trigger shutdown"""
        if self._stop_event and not self._stop_event.is_set():
            self._stop_event.set()

    async def start(self):
        """Start both blockchain node and P2P server"""
        self._loop = asyncio.get_running_loop()
        logger.info("Starting combined blockchain service")

        # Start blockchain node in background
        node_task = asyncio.create_task(run_node())
        self._tasks.append(node_task)

        logger.info(f"Combined service started - Node on mainnet")

        try:
            await self._stop_event.wait()
        finally:
            await self.stop()

    async def stop(self):
        """Stop all services"""
        logger.info("Stopping combined blockchain service")

        # Cancel all tasks
        for task in self._tasks:
            task.cancel()

        # Wait for tasks to complete
        if self._tasks:
            await asyncio.gather(*self._tasks, return_exceptions=True)

        self._tasks.clear()
        logger.info("Combined service stopped")

# Global service instance for signal handler
_service_instance = None

def signal_handler(signum, frame):
    """Handle shutdown signals"""
    logger.info(f"Received signal {signum}, initiating shutdown")
    global _service_instance
    if _service_instance:
        _service_instance.set_stop_event()

async def main():
    """Main entry point"""
    global _service_instance
    service = CombinedService()
    _service_instance = service

    # Set up signal handlers
    signal.signal(signal.SIGTERM, signal_handler)
    signal.signal(signal.SIGINT, signal_handler)

    try:
        await service.start()
    except KeyboardInterrupt:
        logger.info("Received keyboard interrupt")
    finally:
        await service.stop()
        _service_instance = None

if __name__ == "__main__":
    asyncio.run(main())
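The `stop()` coroutine above relies on `asyncio.gather(..., return_exceptions=True)` to capture the `CancelledError` raised into each cancelled task instead of letting it propagate. The same shutdown pattern in isolation (`shutdown_demo` is a hypothetical example, not PR code):

```python
import asyncio

async def shutdown_demo():
    async def worker():
        await asyncio.sleep(3600)       # stands in for the long-running node task

    task = asyncio.create_task(worker())
    await asyncio.sleep(0)              # let the worker start and park on sleep
    task.cancel()
    # return_exceptions=True turns the cancellation into a result value
    results = await asyncio.gather(task, return_exceptions=True)
    return [type(r).__name__ for r in results]

print(asyncio.run(shutdown_demo()))  # ['CancelledError']
```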
@@ -36,6 +36,9 @@ class ChainSettings(BaseSettings):

    block_time_seconds: int = 2

+    # Block production toggle (set false on followers)
+    enable_block_production: bool = True

    # Block production limits
    max_block_size_bytes: int = 1_000_000  # 1 MB
    max_txs_per_block: int = 500
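Followers would flip the new toggle through the environment (as in the `.env.example` above), and environment values arrive as strings, so a boolean setting needs explicit parsing. The `env_bool` helper below is a hypothetical sketch of that parsing; the project presumably relies on pydantic's `BaseSettings` coercion instead.

```python
import os

def env_bool(name: str, default: bool, environ=os.environ) -> bool:
    """Parse a boolean environment variable; unset falls back to the default."""
    raw = environ.get(name)
    if raw is None:
        return default
    return raw.strip().lower() in {"1", "true", "yes", "on"}

# A follower's environment would set enable_block_production=false:
follower_env = {"ENABLE_BLOCK_PRODUCTION": "false"}
print(env_bool("ENABLE_BLOCK_PRODUCTION", True, follower_env))  # False
```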
@@ -120,12 +120,11 @@ class PoAProposer:
            return

    async def _propose_block(self) -> None:
-        # Check internal mempool - but produce empty blocks to keep chain moving
+        # Check internal mempool and include transactions
        from ..mempool import get_mempool
-        mempool_size = get_mempool().size(self._config.chain_id)
-        if mempool_size == 0:
-            self._logger.debug("No transactions in mempool, producing empty block")
+        from ..models import Transaction, Account
+        mempool = get_mempool()

        with self._session_factory() as session:
            head = session.exec(select(Block).where(Block.chain_id == self._config.chain_id).order_by(Block.height.desc()).limit(1)).first()
            next_height = 0
@@ -137,7 +136,71 @@ class PoAProposer:
                interval_seconds = (datetime.utcnow() - head.timestamp).total_seconds()

            timestamp = datetime.utcnow()
-            block_hash = self._compute_block_hash(next_height, parent_hash, timestamp)
+
+            # Pull transactions from mempool
+            max_txs = self._config.max_txs_per_block
+            max_bytes = self._config.max_block_size_bytes
+            pending_txs = mempool.drain(max_txs, max_bytes, self._config.chain_id)
+            self._logger.info(f"[PROPOSE] drained {len(pending_txs)} txs from mempool, chain={self._config.chain_id}")
+
+            # Process transactions and update balances
+            processed_txs = []
+            for tx in pending_txs:
+                try:
+                    # Parse transaction data
+                    tx_data = tx.content
+                    sender = tx_data.get("sender")
+                    recipient = tx_data.get("payload", {}).get("to")
+                    value = tx_data.get("payload", {}).get("value", 0)
+                    fee = tx_data.get("fee", 0)
+
+                    if not sender or not recipient:
+                        continue
+
+                    # Get sender account
+                    sender_account = session.get(Account, (self._config.chain_id, sender))
+                    if not sender_account:
+                        continue
+
+                    # Check sufficient balance
+                    total_cost = value + fee
+                    if sender_account.balance < total_cost:
+                        continue
+
+                    # Get or create recipient account
+                    recipient_account = session.get(Account, (self._config.chain_id, recipient))
+                    if not recipient_account:
+                        recipient_account = Account(chain_id=self._config.chain_id, address=recipient, balance=0, nonce=0)
+                        session.add(recipient_account)
+                        session.flush()
+
+                    # Update balances
+                    sender_account.balance -= total_cost
+                    sender_account.nonce += 1
+                    recipient_account.balance += value
+
+                    # Create transaction record
+                    transaction = Transaction(
+                        chain_id=self._config.chain_id,
+                        tx_hash=tx.tx_hash,
+                        sender=sender,
+                        recipient=recipient,
+                        value=value,
+                        fee=fee,
+                        nonce=sender_account.nonce - 1,
+                        timestamp=timestamp,
+                        block_height=next_height,
+                        status="confirmed"
+                    )
+                    session.add(transaction)
+                    processed_txs.append(tx)
+
+                except Exception as e:
+                    self._logger.warning(f"Failed to process transaction {tx.tx_hash}: {e}")
+                    continue
+
+            # Compute block hash with transaction data
+            block_hash = self._compute_block_hash(next_height, parent_hash, timestamp, processed_txs)

            block = Block(
                chain_id=self._config.chain_id,
@@ -146,7 +209,7 @@ class PoAProposer:
                parent_hash=parent_hash,
                proposer=self._config.proposer_id,
                timestamp=timestamp,
-                tx_count=0,
+                tx_count=len(processed_txs),
                state_root=None,
            )
            session.add(block)
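The balance updates in this hunk follow a simple transfer rule: the sender is debited `value + fee`, the recipient is credited `value`, and underfunded transactions are skipped. A pure-Python sketch of that rule, with a dict standing in for the SQLModel `Account` rows (`apply_transfer` is a hypothetical helper, not project code):

```python
def apply_transfer(balances: dict, sender: str, recipient: str,
                   value: int, fee: int) -> bool:
    """Debit value + fee from sender, credit value to recipient."""
    total_cost = value + fee
    if balances.get(sender, 0) < total_cost:
        return False                      # insufficient balance: tx skipped
    balances[sender] -= total_cost
    balances[recipient] = balances.get(recipient, 0) + value
    return True

ledger = {"ait1alice": 100}
apply_transfer(ledger, "ait1alice", "ait1bob", 30, 2)
print(ledger)  # {'ait1alice': 68, 'ait1bob': 30}
```

Note the fee is not credited to anyone here, matching the diff, where only sender and recipient balances change.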
@@ -259,6 +322,11 @@ class PoAProposer:
        with self._session_factory() as session:
            return session.exec(select(Block).order_by(Block.height.desc()).limit(1)).first()

-    def _compute_block_hash(self, height: int, parent_hash: str, timestamp: datetime) -> str:
-        payload = f"{self._config.chain_id}|{height}|{parent_hash}|{timestamp.isoformat()}".encode()
+    def _compute_block_hash(self, height: int, parent_hash: str, timestamp: datetime, transactions: list = None) -> str:
+        # Include transaction hashes in block hash computation
+        tx_hashes = []
+        if transactions:
+            tx_hashes = [tx.tx_hash for tx in transactions]
+
+        payload = f"{self._config.chain_id}|{height}|{parent_hash}|{timestamp.isoformat()}|{'|'.join(sorted(tx_hashes))}".encode()
        return "0x" + hashlib.sha256(payload).hexdigest()
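Because the new hash joins the *sorted* transaction hashes, two blocks containing the same transactions in different mempool order hash identically. A standalone version of the computation (a module-level function with `chain_id` passed explicitly instead of the method's `self._config`):

```python
import hashlib
from datetime import datetime

def compute_block_hash(chain_id, height, parent_hash, timestamp, tx_hashes=()):
    # Sorting makes the hash independent of mempool drain order
    payload = f"{chain_id}|{height}|{parent_hash}|{timestamp.isoformat()}|{'|'.join(sorted(tx_hashes))}".encode()
    return "0x" + hashlib.sha256(payload).hexdigest()

ts = datetime(2025, 1, 1)
a = compute_block_hash("ait-mainnet", 7, "0xabc", ts, ["0x2", "0x1"])
b = compute_block_hash("ait-mainnet", 7, "0xabc", ts, ["0x1", "0x2"])
print(a == b)  # True: mempool ordering does not change the block hash
```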
@@ -148,7 +148,11 @@ class BlockchainNode:
            max_size=settings.mempool_max_size,
            min_fee=settings.min_fee,
        )
-        self._start_proposers()
+        # Start proposers only if enabled (followers set enable_block_production=False)
+        if getattr(settings, "enable_block_production", True):
+            self._start_proposers()
+        else:
+            logger.info("Block production disabled on this node", extra={"proposer_id": settings.proposer_id})
        await self._setup_gossip_subscribers()
        try:
            await self._stop_event.wait()
@@ -110,6 +110,7 @@ async def get_block(height: int) -> Dict[str, Any]:
        "height": block.height,
        "hash": block.hash,
+        "parent_hash": block.parent_hash,
        "proposer": block.proposer,
        "timestamp": block.timestamp.isoformat(),
        "tx_count": block.tx_count,
        "state_root": block.state_root,
@@ -154,6 +155,7 @@ async def get_blocks_range(start: int, end: int) -> Dict[str, Any]:
        "height": block.height,
        "hash": block.hash,
+        "parent_hash": block.parent_hash,
        "proposer": block.proposer,
        "timestamp": block.timestamp.isoformat(),
        "tx_count": block.tx_count,
        "state_root": block.state_root,
@@ -312,6 +314,7 @@ async def get_receipts(limit: int = 20, offset: int = 0) -> Dict[str, Any]:

@router.get("/getBalance/{address}", summary="Get account balance")
async def get_balance(address: str, chain_id: str = None) -> Dict[str, Any]:
+    chain_id = get_chain_id(chain_id)
    metrics_registry.increment("rpc_get_balance_total")
    start = time.perf_counter()
    with session_scope() as session:
@@ -652,34 +655,37 @@ async def get_blockchain_info(chain_id: str = None) -> Dict[str, Any]:
async def get_token_supply(chain_id: str = None) -> Dict[str, Any]:
    """Get token supply information"""
    from ..config import settings as cfg
+    from ..models import Account

+    # Use default chain_id from settings if not provided
+    if chain_id is None:
+        chain_id = cfg.chain_id

    chain_id = get_chain_id(chain_id)
    metrics_registry.increment("rpc_supply_total")
    start = time.perf_counter()

    with session_scope() as session:
-        # Production implementation - no faucet in mainnet
+        # Calculate actual values from database
+        accounts = session.exec(select(Account).where(Account.chain_id == chain_id)).all()
+        total_balance = sum(account.balance for account in accounts)
+        total_accounts = len(accounts)
+
+        # Production implementation - calculate real circulating supply
        if chain_id == "ait-mainnet":
            response = {
                "chain_id": chain_id,
                "total_supply": 1000000000,  # 1 billion from genesis
-                "circulating_supply": 0,  # No transactions yet
+                "circulating_supply": total_balance,  # Actual tokens in circulation
                "mint_per_unit": cfg.mint_per_unit,
-                "total_accounts": 0
+                "total_accounts": total_accounts  # Actual account count
            }
        else:
-            # Devnet with faucet
+            # Devnet with faucet - use actual calculations
            response = {
                "chain_id": chain_id,
                "total_supply": 1000000000,  # 1 billion from genesis
-                "circulating_supply": 0,  # No transactions yet
+                "circulating_supply": total_balance,  # Actual tokens in circulation
                "faucet_balance": 1000000000,  # All tokens in faucet
                "faucet_address": "ait1faucet000000000000000000000000000000000",
                "mint_per_unit": cfg.mint_per_unit,
-                "total_accounts": 0
+                "total_accounts": total_accounts  # Actual account count
            }

    metrics_registry.observe("rpc_supply_duration_seconds", time.perf_counter() - start)
|
||||
|
||||
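The supply figures above are just an aggregation over per-account balances. A minimal sketch of that aggregation, with a plain dataclass standing in for the SQLModel `Account` row (the names `Account` fields aside are illustrative, not the project's API):

```python
from dataclasses import dataclass

@dataclass
class Account:
    chain_id: str
    address: str
    balance: int

def token_supply(accounts, chain_id, total_supply=1_000_000_000):
    """Aggregate supply the way the RPC handler does:
    circulating supply = sum of balances on the chain,
    total_accounts = number of account rows on the chain."""
    rows = [a for a in accounts if a.chain_id == chain_id]
    return {
        "chain_id": chain_id,
        "total_supply": total_supply,
        "circulating_supply": sum(a.balance for a in rows),
        "total_accounts": len(rows),
    }

accounts = [
    Account("ait-mainnet", "ait1alice", 500),
    Account("ait-mainnet", "ait1bob", 250),
    Account("ait-devnet", "ait1dev", 999),
]
info = token_supply(accounts, "ait-mainnet")
```

The key change in the hunk is exactly this: the hardcoded `0` placeholders are replaced by values computed from the database rows.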
@@ -14,7 +14,7 @@ from sqlmodel import Session, select

from .config import settings
from .logger import get_logger
from .metrics import metrics_registry
-from .models import Block, Transaction
+from .models import Block, Transaction, Account

logger = get_logger(__name__)
@@ -202,15 +202,45 @@ class ChainSync:
            )
            session.add(block)

-           # Import transactions if provided
+           # Import transactions if provided and apply state changes
            if transactions:
                for tx_data in transactions:
+                   sender_addr = tx_data.get("sender", "")
+                   payload = tx_data.get("payload", {}) or {}
+                   recipient_addr = payload.get("to") or tx_data.get("recipient", "")
+                   value = int(payload.get("value", 0) or 0)
+                   fee = int(tx_data.get("fee", 0) or 0)
+                   tx_hash = tx_data.get("tx_hash", "")
+
+                   # Upsert sender/recipient accounts
+                   sender_acct = session.get(Account, (self._chain_id, sender_addr))
+                   if sender_acct is None:
+                       sender_acct = Account(chain_id=self._chain_id, address=sender_addr, balance=0, nonce=0)
+                       session.add(sender_acct)
+                       session.flush()
+
+                   recipient_acct = session.get(Account, (self._chain_id, recipient_addr))
+                   if recipient_acct is None:
+                       recipient_acct = Account(chain_id=self._chain_id, address=recipient_addr, balance=0, nonce=0)
+                       session.add(recipient_acct)
+                       session.flush()
+
+                   # Apply balances/nonce; assume block validity already verified on producer
+                   total_cost = value + fee
+                   sender_acct.balance -= total_cost
+                   tx_nonce = tx_data.get("nonce")
+                   if tx_nonce is not None:
+                       sender_acct.nonce = max(sender_acct.nonce, int(tx_nonce) + 1)
+                   else:
+                       sender_acct.nonce += 1
+                   recipient_acct.balance += value
+
                    tx = Transaction(
                        chain_id=self._chain_id,
-                       tx_hash=tx_data.get("tx_hash", ""),
+                       tx_hash=tx_hash,
                        block_height=block_data["height"],
-                       sender=tx_data.get("sender", ""),
-                       recipient=tx_data.get("recipient", ""),
+                       sender=sender_addr,
+                       recipient=recipient_addr,
                        payload=tx_data,
                    )
                    session.add(tx)
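The per-transaction state transition in this hunk can be isolated as a small pure function: debit `value + fee` from the sender, credit `value` to the recipient, and advance the nonce monotonically. A sketch under the same assumptions as the diff (`apply_transfer` is a hypothetical name; like the sync code, it does no validity checking, so a sender balance can go negative because the producer is trusted to have verified the block):

```python
def apply_transfer(balances, nonces, sender, recipient, value, fee, tx_nonce=None):
    """Apply one transfer to in-memory state, mirroring the sync logic above.
    Accounts are created implicitly with balance 0 / nonce 0."""
    balances.setdefault(sender, 0)
    balances.setdefault(recipient, 0)
    nonces.setdefault(sender, 0)

    balances[sender] -= value + fee      # sender pays value plus fee
    balances[recipient] += value         # recipient receives only the value
    if tx_nonce is not None:
        # never move the nonce backwards, even on out-of-order import
        nonces[sender] = max(nonces[sender], tx_nonce + 1)
    else:
        nonces[sender] += 1

balances, nonces = {}, {}
apply_transfer(balances, nonces, "ait1alice", "ait1bob", value=100, fee=5, tx_nonce=0)
apply_transfer(balances, nonces, "ait1alice", "ait1bob", value=10, fee=0, tx_nonce=5)
```

The `max(nonce, tx_nonce + 1)` rule is what keeps a replayed or out-of-order transaction import from rewinding an account's nonce.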
@@ -9,12 +9,57 @@ from ..core.genesis_generator import GenesisGenerator, GenesisValidationError
from ..core.config import MultiChainConfig, load_multichain_config
from ..models.chain import GenesisConfig
from ..utils import output, error, success
+from .keystore import create_keystore_via_script
+import subprocess
+import sys

@click.group()
def genesis():
    """Genesis block generation and management commands"""
    pass


+@genesis.command()
+@click.option('--address', required=True, help='Wallet address (id) to create')
+@click.option('--password-file', default='/opt/aitbc/data/keystore/.password', show_default=True, type=click.Path(exists=True, dir_okay=False), help='Path to password file')
+@click.option('--output-dir', default='/opt/aitbc/data/keystore', show_default=True, help='Directory to write keystore file')
+@click.option('--force', is_flag=True, help='Overwrite existing keystore file if present')
+@click.pass_context
+def create_keystore(ctx, address, password_file, output_dir, force):
+    """Create an encrypted keystore for a genesis/treasury address."""
+    try:
+        create_keystore_via_script(address=address, password_file=password_file, output_dir=output_dir, force=force)
+        success(f"Created keystore for {address} at {output_dir}")
+    except Exception as e:
+        error(f"Error creating keystore: {e}")
+        raise click.Abort()
+
+
+@genesis.command(name="init-production")
+@click.option('--chain-id', default='ait-mainnet', show_default=True, help='Chain ID to initialize')
+@click.option('--genesis-file', default='data/genesis_prod.yaml', show_default=True, help='Path to genesis YAML (copy to /opt/aitbc/genesis_prod.yaml if needed)')
+@click.option('--db', default='/opt/aitbc/data/ait-mainnet/chain.db', show_default=True, help='SQLite DB path')
+@click.option('--force', is_flag=True, help='Overwrite existing DB (removes file if present)')
+@click.pass_context
+def init_production(ctx, chain_id, genesis_file, db, force):
+    """Initialize production chain DB using genesis allocations."""
+    db_path = Path(db)
+    if db_path.exists() and force:
+        db_path.unlink()
+    python_bin = Path(__file__).resolve().parents[3] / 'apps' / 'blockchain-node' / '.venv' / 'bin' / 'python3'
+    cmd = [
+        str(python_bin),
+        str(Path(__file__).resolve().parents[3] / 'scripts' / 'init_production_genesis.py'),
+        '--chain-id', chain_id,
+        '--db', db,
+    ]
+    try:
+        subprocess.run(cmd, check=True)
+        success(f"Initialized production genesis for {chain_id} at {db}")
+    except subprocess.CalledProcessError as e:
+        error(f"Genesis init failed: {e}")
+        raise click.Abort()

@genesis.command()
@click.argument('config_file', type=click.Path(exists=True))
@click.option('--output', '-o', 'output_file', help='Output file path')
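`init-production` shells out with `check=True`, so a non-zero exit from the genesis script surfaces as `subprocess.CalledProcessError` instead of being silently swallowed. A self-contained sketch of that pattern (the `run_step` helper and the commands are illustrative, not the project's code):

```python
import subprocess

def run_step(cmd):
    """Run a subcommand; translate failure into a short status string."""
    try:
        # check=True raises CalledProcessError on any non-zero exit code
        subprocess.run(cmd, check=True, capture_output=True)
        return "ok"
    except subprocess.CalledProcessError as e:
        return f"failed with exit code {e.returncode}"

status_ok = run_step(["python3", "-c", "print('genesis init')"])
status_bad = run_step(["python3", "-c", "import sys; sys.exit(2)"])
```

This is what lets the CLI command turn a failed init into `click.Abort()` with a clear error message.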
cli/aitbc_cli/commands/keystore.py (new file, 67 lines)
@@ -0,0 +1,67 @@
+import click
+import importlib.util
+from pathlib import Path
+
+
+def _load_keystore_script():
+    """Dynamically load the top-level scripts/keystore.py module."""
+    root = Path(__file__).resolve().parents[3]  # /opt/aitbc
+    ks_path = root / "scripts" / "keystore.py"
+    spec = importlib.util.spec_from_file_location("aitbc_scripts_keystore", ks_path)
+    if spec is None or spec.loader is None:
+        raise ImportError(f"Unable to load keystore script from {ks_path}")
+    module = importlib.util.module_from_spec(spec)
+    spec.loader.exec_module(module)
+    return module
+
+
+@click.group()
+def keystore():
+    """Keystore operations (create wallets/keystores)."""
+    pass
+
+
+@keystore.command()
+@click.option("--address", required=True, help="Wallet address (id) to create")
+@click.option(
+    "--password-file",
+    default="/opt/aitbc/data/keystore/.password",
+    show_default=True,
+    type=click.Path(exists=True, dir_okay=False),
+    help="Path to password file",
+)
+@click.option(
+    "--output",
+    default="/opt/aitbc/data/keystore",
+    show_default=True,
+    help="Directory to write keystore files",
+)
+@click.option(
+    "--force",
+    is_flag=True,
+    help="Overwrite existing keystore file if present",
+)
+@click.pass_context
+def create(ctx, address: str, password_file: str, output: str, force: bool):
+    """Create an encrypted keystore for the given address.
+
+    Examples:
+        aitbc keystore create --address aitbc1genesis
+        aitbc keystore create --address aitbc1treasury --password-file keystore/.password --output keystore
+    """
+    pwd_path = Path(password_file)
+    with open(pwd_path, "r", encoding="utf-8") as f:
+        password = f.read().strip()
+    out_dir = Path(output) if output else Path("/opt/aitbc/data/keystore")
+    out_dir.mkdir(parents=True, exist_ok=True)
+
+    ks_module = _load_keystore_script()
+    ks_module.create_keystore(address=address, password=password, keystore_dir=out_dir, force=force)
+    click.echo(f"Created keystore for {address} at {out_dir}")
+
+
+# Helper so other commands (genesis) can reuse the same logic
+def create_keystore_via_script(address: str, password_file: str = "/opt/aitbc/data/keystore/.password", output_dir: str = "/opt/aitbc/data/keystore", force: bool = False):
+    pwd = Path(password_file).read_text(encoding="utf-8").strip()
+    out_dir = Path(output_dir)
+    out_dir.mkdir(parents=True, exist_ok=True)
+    ks_module = _load_keystore_script()
+    ks_module.create_keystore(address=address, password=pwd, keystore_dir=out_dir, force=force)
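`_load_keystore_script` uses the standard `importlib.util` recipe for importing a module from an arbitrary file path, which is why the CLI can reuse `scripts/keystore.py` without it being on `sys.path`. A self-contained sketch of the same recipe against a temporary file (`load_module_from_path` and the helper module are illustrative):

```python
import importlib.util
import tempfile
from pathlib import Path

def load_module_from_path(name, path):
    """Load a Python file as a module without it being importable normally."""
    spec = importlib.util.spec_from_file_location(name, path)
    if spec is None or spec.loader is None:
        raise ImportError(f"Unable to load module from {path}")
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)  # executes the file's top-level code
    return module

with tempfile.TemporaryDirectory() as d:
    script = Path(d) / "helper.py"
    script.write_text("def greet(name):\n    return f'hello {name}'\n")
    mod = load_module_from_path("helper_dynamic", script)
    greeting = mod.greet("aitbc")
```

One caveat of this approach: the loaded module is not registered in `sys.modules` unless you add it yourself, so repeated loads execute the file again.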
@@ -46,6 +46,7 @@ from .commands.marketplace_advanced import advanced  # Re-enabled after fixing r
from .commands.swarm import swarm
from .commands.chain import chain
from .commands.genesis import genesis
+from .commands.keystore import keystore
from .commands.test_cli import test
from .commands.node import node
from .commands.analytics import analytics

@@ -256,6 +257,7 @@ cli.add_command(ai_group)
cli.add_command(swarm)
cli.add_command(chain)
cli.add_command(genesis)
+cli.add_command(keystore)
cli.add_command(test)
cli.add_command(node)
cli.add_command(analytics)
@@ -38,10 +38,10 @@ def check_build_tests():
    rc, out = sh("poetry check")
    checks.append(("poetry check", rc == 0, out))
    # 2) Fast syntax check of CLI package
-   rc, out = sh("python -m py_compile cli/aitbc_cli/__main__.py")
+   rc, out = sh("python3 -m py_compile cli/aitbc_cli/main.py")
    checks.append(("cli syntax", rc == 0, out if rc != 0 else "OK"))
    # 3) Minimal test run (dry-run or 1 quick test)
-   rc, out = sh("python -m pytest tests/ -v --collect-only 2>&1 | head -20")
+   rc, out = sh("python3 -m pytest tests/ -v --collect-only 2>&1 | head -20")
    tests_ok = rc == 0
    checks.append(("test discovery", tests_ok, out if not tests_ok else f"Collected {out.count('test') if 'test' in out else '?'} tests"))
    all_ok = all(ok for _, ok, _ in checks)

@@ -86,7 +86,7 @@ def check_vulnerabilities():
    """Run security audits for Python and Node dependencies."""
    issues = []
    # Python: pip-audit (if available)
-   rc, out = sh("bash -c 'pip-audit --requirement <(poetry export --without-hashes) 2>&1'")
+   rc, out = sh("pip-audit --requirement <(poetry export --without-hashes) 2>&1")
    if rc == 0:
        # No vulnerabilities
        pass
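Both the old and new `pip-audit` invocations rely on process substitution (`<(...)`), which is a bash feature rather than POSIX `sh` syntax, so whether the unwrapped form works depends on which shell the `sh()` helper ultimately spawns. A quick, self-contained way to see the difference (only the bash result is asserted, since `/bin/sh` behavior varies by distro):

```python
import subprocess

# bash understands process substitution: <(echo ok) becomes a readable fd path
bash = subprocess.run(
    ["bash", "-c", "cat <(echo ok)"],
    capture_output=True, text=True,
)

# /bin/sh may be dash (Debian/Ubuntu), which rejects `<(` as a syntax error;
# on distros where sh is a bash symlink this can still succeed
sh = subprocess.run(
    ["sh", "-c", "cat <(echo ok)"],
    capture_output=True, text=True,
)
```

If `sh()` runs commands through a POSIX shell, dropping the `bash -c` wrapper would make the audit command itself fail to parse on some systems.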
poetry.lock (generated, 1866 lines)
File diff suppressed because it is too large
@@ -147,7 +147,7 @@ dev = [
    "black==24.3.0",
    "isort==8.0.1",
    "ruff==0.15.5",
-   "mypy==1.8.0",
+   "mypy>=1.19.1,<2.0.0",
    "bandit==1.7.5",
    "types-requests==2.31.0",
    "types-setuptools==69.0.0",
@@ -128,8 +128,8 @@ def main() -> None:
    if args.db_path:
        os.environ["DB_PATH"] = str(args.db_path)

-   from aitbc_chain.config import Settings
-   settings = Settings()
+   from aitbc_chain.config import ChainSettings
+   settings = ChainSettings()

    print(f"[*] Initializing database at {settings.db_path}")
    init_db()
@@ -41,6 +41,28 @@ def encrypt_private_key(private_key_hex: str, password: str) -> dict:
    }


+def create_keystore(address: str, password: str, keystore_dir: Path | str = "/opt/aitbc/keystore", force: bool = False) -> Path:
+    """Create encrypted keystore file and return its path."""
+    keystore_dir = Path(keystore_dir)
+    keystore_dir.mkdir(parents=True, exist_ok=True)
+    out_file = keystore_dir / f"{address}.json"
+
+    if out_file.exists() and not force:
+        raise FileExistsError(f"Keystore file {out_file} exists. Use force=True to overwrite.")
+
+    private_key = secrets.token_hex(32)
+    encrypted = encrypt_private_key(private_key, password)
+    keystore = {
+        "address": address,
+        "crypto": encrypted,
+        "created_at": datetime.utcnow().isoformat() + "Z",
+    }
+
+    out_file.write_text(json.dumps(keystore, indent=2))
+    os.chmod(out_file, 0o600)
+    return out_file
+
+
def main() -> None:
    parser = argparse.ArgumentParser(description="Generate encrypted keystore for an account")
    parser.add_argument("address", help="Account address (e.g., aitbc1treasury)")
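The new `create_keystore` has two safety properties worth noting: it refuses to clobber an existing keystore unless `force` is set, and it chmods the file to owner-only (`0o600`). A stripped-down sketch of that write path (`write_keystore` is a hypothetical name, and the `crypto` payload is a placeholder rather than the real `encrypt_private_key` output):

```python
import json
import os
import secrets
import stat
import tempfile
from datetime import datetime, timezone
from pathlib import Path

def write_keystore(address, keystore_dir, force=False):
    """Write a keystore JSON with 0o600 permissions; refuse to
    overwrite an existing file unless force is set."""
    keystore_dir = Path(keystore_dir)
    keystore_dir.mkdir(parents=True, exist_ok=True)
    out_file = keystore_dir / f"{address}.json"
    if out_file.exists() and not force:
        raise FileExistsError(f"{out_file} exists; pass force=True to overwrite")
    payload = {
        "address": address,
        "crypto": {"placeholder": secrets.token_hex(8)},  # stand-in for encrypted key
        "created_at": datetime.now(timezone.utc).isoformat(),
    }
    out_file.write_text(json.dumps(payload, indent=2))
    os.chmod(out_file, 0o600)  # owner read/write only
    return out_file

with tempfile.TemporaryDirectory() as d:
    path = write_keystore("aitbc1demo", d)
    mode = stat.S_IMODE(path.stat().st_mode)
    try:
        write_keystore("aitbc1demo", d)  # second call without force must fail
        clobber_blocked = False
    except FileExistsError:
        clobber_blocked = True
```

One gap this sketch shares with the diff: the file briefly exists with default permissions between `write_text` and `chmod`; opening with restrictive mode up front would close that window.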
@@ -39,7 +39,7 @@ def main():

    # 1. Keystore directory and password
    run(f"mkdir -p {KEYS_DIR}")
-   run(f"chown -R aitbc:aitbc {KEYS_DIR}")
+   run(f"chown -R root:root {KEYS_DIR}")
    if not PASSWORD_FILE.exists():
        run(f"openssl rand -hex 32 > {PASSWORD_FILE}")
        run(f"chmod 600 {PASSWORD_FILE}")

@@ -48,7 +48,7 @@ def main():
    # 2. Generate keystores
    print("\n=== Generating keystore for aitbc1genesis ===")
    result = run(
-       f"sudo -u aitbc {NODE_VENV} /opt/aitbc/scripts/keystore.py aitbc1genesis --output-dir {KEYS_DIR} --force",
+       f"{NODE_VENV} /opt/aitbc/scripts/keystore.py aitbc1genesis --output-dir {KEYS_DIR} --force",
        capture_output=True
    )
    print(result.stdout)

@@ -65,7 +65,7 @@ def main():

    print("\n=== Generating keystore for aitbc1treasury ===")
    result = run(
-       f"sudo -u aitbc {NODE_VENV} /opt/aitbc/scripts/keystore.py aitbc1treasury --output-dir {KEYS_DIR} --force",
+       f"{NODE_VENV} /opt/aitbc/scripts/keystore.py aitbc1treasury --output-dir {KEYS_DIR} --force",
        capture_output=True
    )
    print(result.stdout)

@@ -82,12 +82,12 @@ def main():

    # 3. Data directory
    run(f"mkdir -p {DATA_DIR}")
-   run(f"chown -R aitbc:aitbc {DATA_DIR}")
+   run(f"chown -R root:root {DATA_DIR}")

    # 4. Initialize DB
    os.environ["DB_PATH"] = str(DB_PATH)
    os.environ["CHAIN_ID"] = CHAIN_ID
-   run(f"sudo -E -u aitbc {NODE_VENV} /opt/aitbc/scripts/init_production_genesis.py --chain-id {CHAIN_ID} --db-path {DB_PATH}")
+   run(f"sudo -E {NODE_VENV} /opt/aitbc/scripts/init_production_genesis.py --chain-id {CHAIN_ID} --db-path {DB_PATH}")

    # 5. Write .env for blockchain node
    env_content = f"""CHAIN_ID={CHAIN_ID}