chore(security): enhance environment configuration, CI workflows, and wallet daemon with security improvements
- Restructure .env.example with security-focused documentation, service-specific environment file references, and AWS Secrets Manager integration
- Update the CLI tests workflow to a single Python version (3.13), add the pytest-mock dependency, and consolidate test execution with coverage
- Add comprehensive security validation to the package publishing workflow with manual approval gates, secret scanning, and release
cli/aitbc_cli/DISABLED_COMMANDS_CLEANUP.md (new file, 143 lines)
@@ -0,0 +1,143 @@
# Disabled Commands Cleanup Analysis

## Overview

This document analyzes the currently disabled CLI commands and provides recommendations for cleanup.

## Disabled Commands

### 1. `openclaw` - Edge Computing Integration

**File**: `cli/aitbc_cli/commands/openclaw.py`
**Status**: Commented out in `main.py` line 28
**Reason**: "Temporarily disabled due to command registration issues"

**Analysis**:
- **Size**: 604 lines of code
- **Functionality**: OpenClaw integration with edge computing deployment
- **Dependencies**: httpx, JSON, time utilities
- **Potential Value**: High - edge computing is strategic for AITBC

**Recommendation**: **FIX AND RE-ENABLE**
- Command registration issues are likely minor (naming conflicts)
- Edge computing integration is valuable for the platform
- Code appears well-structured and complete

### 2. `marketplace_advanced` - Advanced Marketplace Features

**File**: `cli/aitbc_cli/commands/marketplace_advanced.py`
**Status**: Commented out in `main.py` line 29
**Reason**: "Temporarily disabled due to command registration issues"

**Analysis**:
- **Size**: Unknown (file not found in current tree)
- **Functionality**: Advanced marketplace features
- **Potential Value**: Medium to High

**Recommendation**: **LOCATE AND EVALUATE**
- File appears to be missing from the current codebase
- May have been accidentally deleted
- Check git history to recover if valuable

### 3. `marketplace_cmd` - Alternative Marketplace Implementation

**File**: `cli/aitbc_cli/commands/marketplace_cmd.py`
**Status**: Exists but disabled (comment in main.py line 18)
**Reason**: Conflict with main `marketplace.py`

**Analysis**:
- **Size**: 495 lines of code
- **Functionality**: Global chain marketplace commands
- **Dependencies**: GlobalChainMarketplace, multichain config
- **Conflict**: Names conflict with existing `marketplace.py`

**Recommendation**: **MERGE OR DELETE**
- Compare with the existing `marketplace.py`
- Merge unique features if valuable
- Delete if redundant

## Cleanup Action Items

### Immediate Actions (High Priority)

1. **Fix `openclaw` registration**
   ```bash
   # Uncomment line 28 in main.py
   # from .commands.openclaw import openclaw
   # cli.add_command(openclaw)
   ```
   - Test for naming conflicts
   - Rename if necessary (e.g., `edge-deploy`)
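If re-enabling `openclaw` does surface a name collision, Click allows registering a command group under an alternate name without touching the module. A minimal sketch under that assumption; the `cli` and `openclaw` groups below are stand-ins for the real objects in `main.py`:

```python
import click

# Stand-ins for the real objects; in the actual codebase this would be
# `from .commands.openclaw import openclaw` inside main.py.
@click.group()
def cli():
    """Top-level CLI group."""

@click.group()
def openclaw():
    """OpenClaw edge computing commands."""

# Passing an explicit name sidesteps a clash with any existing
# `openclaw` command while leaving the module itself untouched.
cli.add_command(openclaw, name="edge-deploy")

print(sorted(cli.commands))  # → ['edge-deploy']
```

Users would then invoke the group as `aitbc edge-deploy ...` while the source file keeps its original identifiers.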

2. **Resolve `marketplace` conflict**
   ```bash
   # Compare files
   diff cli/aitbc_cli/commands/marketplace.py cli/aitbc_cli/commands/marketplace_cmd.py
   ```
   - Merge unique features
   - Delete redundant file

3. **Locate missing `marketplace_advanced`**
   ```bash
   git log --all -- "**/marketplace_advanced.py"
   git checkout HEAD~1 -- cli/aitbc_cli/commands/marketplace_advanced.py
   ```

### Code Quality Improvements

1. **Add command registration validation**
   - Prevent future naming conflicts
   - Add unit tests for command registration
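The validation item above can be made concrete as a guarded registration helper that fails loudly on duplicates. A sketch assuming a Click-based entry point; all names here are illustrative, not the real `aitbc_cli` objects:

```python
import click

@click.group()
def cli():
    """Stand-in for the top-level aitbc_cli group."""

@cli.command()
def marketplace():
    """Existing marketplace command."""

def register_checked(group, cmd, name=None):
    """Register a command, refusing names that are already taken."""
    final = name or cmd.name
    if final in group.commands:
        raise ValueError(f"command name conflict: {final!r} is already registered")
    group.add_command(cmd, name=name)

@click.command()
def marketplace_cmd():
    """Alternative marketplace implementation."""

# Registering under a taken name raises instead of silently
# overwriting the existing command.
try:
    register_checked(cli, marketplace_cmd, name="marketplace")
except ValueError as exc:
    print(f"rejected: {exc}")
```

The same check drops into a pytest test that walks every module in `commands/` and asserts their command names are pairwise distinct.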

2. **Document command dependencies**
   - Add clear documentation for each command
   - Include dependency requirements

3. **Create command deprecation policy**
   - Formal process for disabling commands
   - Clear timeline for removal
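A deprecation policy like the one proposed can be enforced mechanically rather than by convention. A minimal stdlib sketch of a decorator that warns whenever a deprecated command runs; the removal-version handling is illustrative, not part of the codebase:

```python
import functools
import warnings

def deprecated_command(removal_version):
    """Mark a CLI command as deprecated; warns on every invocation."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            warnings.warn(
                f"{func.__name__} is deprecated and scheduled for removal "
                f"in {removal_version}",
                DeprecationWarning,
                stacklevel=2,
            )
            return func(*args, **kwargs)
        return wrapper
    return decorator

@deprecated_command(removal_version="v2.0")
def marketplace_cmd():
    return "ok"

# The command still works, but each call records a DeprecationWarning.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = marketplace_cmd()

assert result == "ok"
assert any(issubclass(w.category, DeprecationWarning) for w in caught)
```

Applied below the Click decorators, this gives users a visible countdown before a command is removed outright.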

## Security Considerations

### Current State
- Disabled commands are still present in the repository
- No security risk from disabled code
- Potential for confusion among users

### Recommendations
- Remove truly unused commands to reduce the attack surface
- Keep valuable disabled code in a separate branch if needed
- Document reasons for disabling

## Testing Requirements

Before re-enabling any disabled command:
1. **Unit Tests**: Verify all functions work correctly
2. **Integration Tests**: Test with the live coordinator API
3. **Command Registration**: Ensure no conflicts with existing commands
4. **Security Review**: Validate no security vulnerabilities
5. **Documentation**: Update help text and usage examples
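Steps 3 and 5 of this checklist can be smoke-tested together with Click's `CliRunner`. A sketch using a stand-in group; in practice the real entry point would be imported from `aitbc_cli.main`:

```python
import click
from click.testing import CliRunner

@click.group()
def cli():
    """Stand-in for the real aitbc_cli entry point."""

@cli.command()
def openclaw():
    """OpenClaw edge computing integration."""
    click.echo("openclaw ok")

runner = CliRunner()

# Registration check: the command resolves and its help text renders.
help_result = runner.invoke(cli, ["openclaw", "--help"])
assert help_result.exit_code == 0

# Invocation check: the command runs without raising.
run_result = runner.invoke(cli, ["openclaw"])
assert run_result.exit_code == 0
assert "openclaw ok" in run_result.output
```

Running this for each re-enabled command catches registration failures and missing help text before they reach users.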

## Timeline

| Week | Action | Status |
|------|--------|--------|
| 1 | Fix openclaw registration issues | 🔄 In Progress |
| 1 | Resolve marketplace command conflicts | 🔄 In Progress |
| 2 | Locate and evaluate marketplace_advanced | ⏳ Pending |
| 2 | Add comprehensive tests | ⏳ Pending |
| 3 | Update documentation | ⏳ Pending |

## Risk Assessment

| Command | Risk Level | Action |
|---------|-----------|--------|
| openclaw | Low | Re-enable after testing |
| marketplace_cmd | Low | Merge or delete |
| marketplace_advanced | Unknown | Locate and evaluate |

## Conclusion

The disabled commands appear to contain valuable functionality that should be restored rather than deleted. The "command registration issues" are likely minor naming conflicts that can be resolved with minimal effort.

**Next Steps**:
1. Fix the registration conflicts
2. Test thoroughly
3. Re-enable valuable commands
4. Remove truly redundant code

This cleanup will improve CLI functionality without compromising security.
@@ -16,7 +16,7 @@ def admin():
 @admin.command()
 @click.pass_context
 def status(ctx):
-    """Get system status"""
+    """Show system status"""
     config = ctx.obj['config']

     try:
@@ -30,13 +30,77 @@ def status(ctx):
             status_data = response.json()
             output(status_data, ctx.obj['output_format'])
         else:
-            error(f"Failed to get system status: {response.status_code}")
+            error(f"Failed to get status: {response.status_code}")
             ctx.exit(1)
     except Exception as e:
         error(f"Network error: {e}")
         ctx.exit(1)


+@admin.command()
+@click.option("--output", type=click.Path(), help="Output report to file")
+@click.pass_context
+def audit_verify(ctx, output):
+    """Verify audit log integrity"""
+    audit_logger = AuditLogger()
+    is_valid, issues = audit_logger.verify_integrity()
+
+    if is_valid:
+        success("Audit log integrity verified - no tampering detected")
+    else:
+        error("Audit log integrity compromised!")
+        for issue in issues:
+            error(f"  - {issue}")
+        ctx.exit(1)
+
+    # Export detailed report if requested
+    if output:
+        try:
+            report = audit_logger.export_report(Path(output))
+            success(f"Audit report exported to {output}")
+
+            # Show summary
+            stats = report["audit_report"]["statistics"]
+            output({
+                "total_entries": stats["total_entries"],
+                "unique_actions": stats["unique_actions"],
+                "unique_users": stats["unique_users"],
+                "date_range": stats["date_range"]
+            }, ctx.obj['output_format'])
+        except Exception as e:
+            error(f"Failed to export report: {e}")
+
+
+@admin.command()
+@click.option("--limit", default=50, help="Number of entries to show")
+@click.option("--action", help="Filter by action type")
+@click.option("--search", help="Search query")
+@click.pass_context
+def audit_logs(ctx, limit: int, action: str, search: str):
+    """View audit logs with integrity verification"""
+    audit_logger = AuditLogger()
+
+    try:
+        if search:
+            entries = audit_logger.search_logs(search, limit)
+        else:
+            entries = audit_logger.get_logs(limit, action)
+
+        if not entries:
+            warning("No audit entries found")
+            return
+
+        # Show entries
+        output({
+            "total_entries": len(entries),
+            "entries": entries
+        }, ctx.obj['output_format'])
+
+    except Exception as e:
+        error(f"Failed to read audit logs: {e}")
+        ctx.exit(1)
+
+
 @admin.command()
 @click.option("--limit", default=50, help="Number of jobs to show")
 @click.option("--status", help="Filter by status")
@@ -546,9 +546,9 @@ def progress(ctx, agent_id: str, metrics: str):
 @click.argument("agent_id")
 @click.option("--format", default="onnx", type=click.Choice(["onnx", "pickle", "torch"]),
               help="Export format")
-@click.option("--output", type=click.Path(), help="Output file path")
+@click.option("--output-path", type=click.Path(), help="Output file path")
 @click.pass_context
-def export(ctx, agent_id: str, format: str, output: Optional[str]):
+def export(ctx, agent_id: str, format: str, output_path: Optional[str]):
     """Export learned agent model"""
     config = ctx.obj['config']

@@ -563,10 +563,10 @@ def export(ctx, agent_id: str, format: str, output: Optional[str]):
         )

         if response.status_code == 200:
-            if output:
-                with open(output, 'wb') as f:
+            if output_path:
+                with open(output_path, 'wb') as f:
                     f.write(response.content)
-                success(f"Model exported to {output}")
+                success(f"Model exported to {output_path}")
             else:
                 # Output metadata about the export
                 export_info = response.headers.get('X-Export-Info', '{}')

@@ -25,7 +25,7 @@ def simulate():
 @click.pass_context
 def init(ctx, distribute: str, reset: bool):
     """Initialize test economy"""
-    home_dir = Path("/home/oib/windsurf/aitbc/home")
+    home_dir = Path("/home/oib/windsurf/aitbc/tests/e2e/fixtures/home")

     if reset:
         success("Resetting simulation...")
@@ -115,7 +115,7 @@ def user():
 @click.pass_context
 def create(ctx, type: str, name: str, balance: float):
     """Create a test user"""
-    home_dir = Path("/home/oib/windsurf/aitbc/home")
+    home_dir = Path("/home/oib/windsurf/aitbc/tests/e2e/fixtures/home")

     user_id = f"{type}_{name}"
     wallet_path = home_dir / f"{user_id}_wallet.json"
@@ -151,7 +151,7 @@ def create(ctx, type: str, name: str, balance: float):
 @click.pass_context
 def list(ctx):
     """List all test users"""
-    home_dir = Path("/home/oib/windsurf/aitbc/home")
+    home_dir = Path("/home/oib/windsurf/aitbc/tests/e2e/fixtures/home")

     users = []
     for wallet_file in home_dir.glob("*_wallet.json"):
@@ -181,7 +181,7 @@ def list(ctx):
 @click.pass_context
 def balance(ctx, user: str):
     """Check user balance"""
-    home_dir = Path("/home/oib/windsurf/aitbc/home")
+    home_dir = Path("/home/oib/windsurf/aitbc/tests/e2e/fixtures/home")
     wallet_path = home_dir / f"{user}_wallet.json"

     if not wallet_path.exists():
@@ -203,7 +203,7 @@ def balance(ctx, user: str):
 @click.pass_context
 def fund(ctx, user: str, amount: float):
     """Fund a test user"""
-    home_dir = Path("/home/oib/windsurf/aitbc/home")
+    home_dir = Path("/home/oib/windsurf/aitbc/tests/e2e/fixtures/home")

     # Load genesis wallet
     genesis_path = home_dir / "genesis_wallet.json"
cli/aitbc_cli/commands/test_cli.py (new file, 467 lines)
@@ -0,0 +1,467 @@
"""
AITBC CLI Testing Commands
Provides testing and debugging utilities for the AITBC CLI
"""

import click
import json
import time
import tempfile
from pathlib import Path
from typing import Dict, Any, Optional
from unittest.mock import Mock, patch

from ..utils import output, success, error, warning
from ..config import get_config


@click.group()
def test():
    """Testing and debugging commands for AITBC CLI"""
    pass


@test.command()
@click.option('--format', type=click.Choice(['json', 'table', 'yaml']), default='table', help='Output format')
@click.pass_context
def environment(ctx, format):
    """Test CLI environment and configuration"""
    config = ctx.obj['config']

    env_info = {
        'coordinator_url': config.coordinator_url,
        'api_key': config.api_key,
        'output_format': ctx.obj['output_format'],
        'test_mode': ctx.obj['test_mode'],
        'dry_run': ctx.obj['dry_run'],
        'timeout': ctx.obj['timeout'],
        'no_verify': ctx.obj['no_verify'],
        'log_level': ctx.obj['log_level']
    }

    if format == 'json':
        output(json.dumps(env_info, indent=2))
    else:
        output("CLI Environment Test Results:")
        output(f"  Coordinator URL: {env_info['coordinator_url']}")
        output(f"  API Key: {env_info['api_key'][:10]}..." if env_info['api_key'] else "  API Key: None")
        output(f"  Output Format: {env_info['output_format']}")
        output(f"  Test Mode: {env_info['test_mode']}")
        output(f"  Dry Run: {env_info['dry_run']}")
        output(f"  Timeout: {env_info['timeout']}s")
        output(f"  No Verify: {env_info['no_verify']}")
        output(f"  Log Level: {env_info['log_level']}")


@test.command()
@click.option('--endpoint', default='health', help='API endpoint to test')
@click.option('--method', default='GET', help='HTTP method')
@click.option('--data', help='JSON data to send (for POST/PUT)')
@click.pass_context
def api(ctx, endpoint, method, data):
    """Test API connectivity"""
    config = ctx.obj['config']

    try:
        import httpx

        # Prepare request
        url = f"{config.coordinator_url.rstrip('/')}/api/v1/{endpoint.lstrip('/')}"
        headers = {}
        if config.api_key:
            headers['Authorization'] = f"Bearer {config.api_key}"

        # Prepare data
        json_data = None
        if data and method in ['POST', 'PUT']:
            json_data = json.loads(data)

        # Make request
        with httpx.Client(verify=not ctx.obj['no_verify'], timeout=ctx.obj['timeout']) as client:
            if method == 'GET':
                response = client.get(url, headers=headers)
            elif method == 'POST':
                response = client.post(url, headers=headers, json=json_data)
            elif method == 'PUT':
                response = client.put(url, headers=headers, json=json_data)
            else:
                raise ValueError(f"Unsupported method: {method}")

        # Display results
        output(f"API Test Results:")
        output(f"  URL: {url}")
        output(f"  Method: {method}")
        output(f"  Status Code: {response.status_code}")
        output(f"  Response Time: {response.elapsed.total_seconds():.3f}s")

        if response.status_code == 200:
            success("✅ API test successful")
            try:
                response_data = response.json()
                output("Response Data:")
                output(json.dumps(response_data, indent=2))
            except:
                output(f"Response: {response.text}")
        else:
            error(f"❌ API test failed with status {response.status_code}")
            output(f"Response: {response.text}")

    except ImportError:
        error("❌ httpx not installed. Install with: pip install httpx")
    except Exception as e:
        error(f"❌ API test failed: {str(e)}")


@test.command()
@click.option('--wallet-name', default='test-wallet', help='Test wallet name')
@click.option('--test-operations', is_flag=True, default=True, help='Test wallet operations')
@click.pass_context
def wallet(ctx, wallet_name, test_operations):
    """Test wallet functionality"""
    from ..commands.wallet import wallet as wallet_cmd

    output(f"Testing wallet functionality with wallet: {wallet_name}")

    # Test wallet creation
    try:
        result = ctx.invoke(wallet_cmd, ['create', wallet_name])
        if result.exit_code == 0:
            success(f"✅ Wallet '{wallet_name}' created successfully")
        else:
            error(f"❌ Wallet creation failed: {result.output}")
            return
    except Exception as e:
        error(f"❌ Wallet creation error: {str(e)}")
        return

    if test_operations:
        # Test wallet balance
        try:
            result = ctx.invoke(wallet_cmd, ['balance'])
            if result.exit_code == 0:
                success("✅ Wallet balance check successful")
                output(f"Balance output: {result.output}")
            else:
                warning(f"⚠️ Wallet balance check failed: {result.output}")
        except Exception as e:
            warning(f"⚠️ Wallet balance check error: {str(e)}")

        # Test wallet info
        try:
            result = ctx.invoke(wallet_cmd, ['info'])
            if result.exit_code == 0:
                success("✅ Wallet info check successful")
                output(f"Info output: {result.output}")
            else:
                warning(f"⚠️ Wallet info check failed: {result.output}")
        except Exception as e:
            warning(f"⚠️ Wallet info check error: {str(e)}")


@test.command()
@click.option('--job-type', default='ml_inference', help='Type of job to test')
@click.option('--test-data', default='{"model": "test-model", "input": "test-data"}', help='Test job data')
@click.pass_context
def job(ctx, job_type, test_data):
    """Test job submission and management"""
    from ..commands.client import client as client_cmd

    output(f"Testing job submission with type: {job_type}")

    try:
        # Parse test data
        job_data = json.loads(test_data)
        job_data['type'] = job_type

        # Test job submission
        with tempfile.NamedTemporaryFile(mode='w', suffix='.json', delete=False) as f:
            json.dump(job_data, f)
            temp_file = f.name

        try:
            result = ctx.invoke(client_cmd, ['submit', '--job-file', temp_file])
            if result.exit_code == 0:
                success("✅ Job submission successful")
                output(f"Submission output: {result.output}")

                # Extract job ID if present
                if 'job_id' in result.output:
                    import re
                    job_id_match = re.search(r'job[_\s-]?id[:\s]+(\w+)', result.output, re.IGNORECASE)
                    if job_id_match:
                        job_id = job_id_match.group(1)
                        output(f"Extracted job ID: {job_id}")

                        # Test job status
                        try:
                            status_result = ctx.invoke(client_cmd, ['status', job_id])
                            if status_result.exit_code == 0:
                                success("✅ Job status check successful")
                                output(f"Status output: {status_result.output}")
                            else:
                                warning(f"⚠️ Job status check failed: {status_result.output}")
                        except Exception as e:
                            warning(f"⚠️ Job status check error: {str(e)}")
            else:
                error(f"❌ Job submission failed: {result.output}")
        finally:
            # Clean up temp file
            Path(temp_file).unlink(missing_ok=True)

    except json.JSONDecodeError:
        error(f"❌ Invalid test data JSON: {test_data}")
    except Exception as e:
        error(f"❌ Job test failed: {str(e)}")


@test.command()
@click.option('--gpu-type', default='RTX 3080', help='GPU type to test')
@click.option('--price', type=float, default=0.1, help='Price to test')
@click.pass_context
def marketplace(ctx, gpu_type, price):
    """Test marketplace functionality"""
    from ..commands.marketplace import marketplace as marketplace_cmd

    output(f"Testing marketplace functionality for {gpu_type} at {price} AITBC/hour")

    # Test marketplace offers listing
    try:
        result = ctx.invoke(marketplace_cmd, ['offers', 'list'])
        if result.exit_code == 0:
            success("✅ Marketplace offers list successful")
            output(f"Offers output: {result.output}")
        else:
            warning(f"⚠️ Marketplace offers list failed: {result.output}")
    except Exception as e:
        warning(f"⚠️ Marketplace offers list error: {str(e)}")

    # Test marketplace pricing
    try:
        result = ctx.invoke(marketplace_cmd, ['pricing', gpu_type])
        if result.exit_code == 0:
            success("✅ Marketplace pricing check successful")
            output(f"Pricing output: {result.output}")
        else:
            warning(f"⚠️ Marketplace pricing check failed: {result.output}")
    except Exception as e:
        warning(f"⚠️ Marketplace pricing check error: {str(e)}")


@test.command()
@click.option('--test-endpoints', is_flag=True, default=True, help='Test blockchain endpoints')
@click.pass_context
def blockchain(ctx, test_endpoints):
    """Test blockchain functionality"""
    from ..commands.blockchain import blockchain as blockchain_cmd

    output("Testing blockchain functionality")

    if test_endpoints:
        # Test blockchain info
        try:
            result = ctx.invoke(blockchain_cmd, ['info'])
            if result.exit_code == 0:
                success("✅ Blockchain info successful")
                output(f"Info output: {result.output}")
            else:
                warning(f"⚠️ Blockchain info failed: {result.output}")
        except Exception as e:
            warning(f"⚠️ Blockchain info error: {str(e)}")

        # Test chain status
        try:
            result = ctx.invoke(blockchain_cmd, ['status'])
            if result.exit_code == 0:
                success("✅ Blockchain status successful")
                output(f"Status output: {result.output}")
            else:
                warning(f"⚠️ Blockchain status failed: {result.output}")
        except Exception as e:
            warning(f"⚠️ Blockchain status error: {str(e)}")


@test.command()
@click.option('--component', help='Specific component to test (wallet, job, marketplace, blockchain, api)')
@click.option('--verbose', is_flag=True, help='Verbose test output')
@click.pass_context
def integration(ctx, component, verbose):
    """Run integration tests"""

    if component:
        output(f"Running integration tests for: {component}")

        if component == 'wallet':
            ctx.invoke(wallet, ['--test-operations'])
        elif component == 'job':
            ctx.invoke(job, [])
        elif component == 'marketplace':
            ctx.invoke(marketplace, [])
        elif component == 'blockchain':
            ctx.invoke(blockchain, [])
        elif component == 'api':
            ctx.invoke(api, ['--endpoint', 'health'])
        else:
            error(f"Unknown component: {component}")
            return
    else:
        output("Running full integration test suite...")

        # Test API connectivity first
        output("1. Testing API connectivity...")
        ctx.invoke(api, ['--endpoint', 'health'])

        # Test wallet functionality
        output("2. Testing wallet functionality...")
        ctx.invoke(wallet, ['--wallet-name', 'integration-test-wallet'])

        # Test marketplace functionality
        output("3. Testing marketplace functionality...")
        ctx.invoke(marketplace, [])

        # Test blockchain functionality
        output("4. Testing blockchain functionality...")
        ctx.invoke(blockchain, [])

        # Test job functionality
        output("5. Testing job functionality...")
        ctx.invoke(job, [])

        success("✅ Integration test suite completed")


@test.command()
@click.option('--output-file', help='Save test results to file')
@click.pass_context
def diagnostics(ctx, output_file):
    """Run comprehensive diagnostics"""

    diagnostics_data = {
        'timestamp': time.time(),
        'test_mode': ctx.obj['test_mode'],
        'dry_run': ctx.obj['dry_run'],
        'config': {
            'coordinator_url': ctx.obj['config'].coordinator_url,
            'api_key_present': bool(ctx.obj['config'].api_key),
            'output_format': ctx.obj['output_format']
        }
    }

    output("Running comprehensive diagnostics...")

    # Test 1: Environment
    output("1. Testing environment...")
    try:
        ctx.invoke(environment, ['--format', 'json'])
        diagnostics_data['environment'] = 'PASS'
    except Exception as e:
        diagnostics_data['environment'] = f'FAIL: {str(e)}'
        error(f"Environment test failed: {str(e)}")

    # Test 2: API Connectivity
    output("2. Testing API connectivity...")
    try:
        ctx.invoke(api, ['--endpoint', 'health'])
        diagnostics_data['api_connectivity'] = 'PASS'
    except Exception as e:
        diagnostics_data['api_connectivity'] = f'FAIL: {str(e)}'
        error(f"API connectivity test failed: {str(e)}")

    # Test 3: Wallet Creation
    output("3. Testing wallet creation...")
    try:
        ctx.invoke(wallet, ['--wallet-name', 'diagnostics-test', '--test-operations'])
        diagnostics_data['wallet_creation'] = 'PASS'
    except Exception as e:
        diagnostics_data['wallet_creation'] = f'FAIL: {str(e)}'
        error(f"Wallet creation test failed: {str(e)}")

    # Test 4: Marketplace
    output("4. Testing marketplace...")
    try:
        ctx.invoke(marketplace, [])
        diagnostics_data['marketplace'] = 'PASS'
    except Exception as e:
        diagnostics_data['marketplace'] = f'FAIL: {str(e)}'
        error(f"Marketplace test failed: {str(e)}")

    # Generate summary
    passed_tests = sum(1 for v in diagnostics_data.values() if isinstance(v, str) and v == 'PASS')
    total_tests = len([k for k in diagnostics_data.keys() if k in ['environment', 'api_connectivity', 'wallet_creation', 'marketplace']])

    diagnostics_data['summary'] = {
        'total_tests': total_tests,
        'passed_tests': passed_tests,
        'failed_tests': total_tests - passed_tests,
        'success_rate': (passed_tests / total_tests * 100) if total_tests > 0 else 0
    }

    # Display results
    output("\n" + "="*50)
    output("DIAGNOSTICS SUMMARY")
    output("="*50)
    output(f"Total Tests: {diagnostics_data['summary']['total_tests']}")
    output(f"Passed: {diagnostics_data['summary']['passed_tests']}")
    output(f"Failed: {diagnostics_data['summary']['failed_tests']}")
    output(f"Success Rate: {diagnostics_data['summary']['success_rate']:.1f}%")

    if diagnostics_data['summary']['success_rate'] == 100:
        success("✅ All diagnostics passed!")
    else:
        warning(f"⚠️ {diagnostics_data['summary']['failed_tests']} test(s) failed")

    # Save to file if requested
    if output_file:
        with open(output_file, 'w') as f:
            json.dump(diagnostics_data, f, indent=2)
        output(f"Diagnostics saved to: {output_file}")


@test.command()
def mock():
    """Generate mock data for testing"""

    mock_data = {
        'wallet': {
            'name': 'test-wallet',
            'address': 'aitbc1test123456789abcdef',
            'balance': 1000.0,
            'transactions': []
        },
        'job': {
            'id': 'test-job-123',
            'type': 'ml_inference',
            'status': 'pending',
            'requirements': {
                'gpu_type': 'RTX 3080',
                'memory_gb': 8,
                'duration_minutes': 30
            }
        },
        'marketplace': {
            'offers': [
                {
                    'id': 'offer-1',
                    'provider': 'test-provider',
                    'gpu_type': 'RTX 3080',
                    'price_per_hour': 0.1,
                    'available': True
                }
            ]
        },
        'blockchain': {
            'chain_id': 'aitbc-testnet',
            'block_height': 1000,
            'network_status': 'active'
        }
    }

    output("Mock data for testing:")
    output(json.dumps(mock_data, indent=2))

    # Save to temp file
    with tempfile.NamedTemporaryFile(mode='w', suffix='.json', delete=False) as f:
        json.dump(mock_data, f, indent=2)
        temp_file = f.name

    output(f"Mock data saved to: {temp_file}")
    return temp_file
@@ -727,8 +727,12 @@ def send(ctx, to_address: str, amount: float, description: Optional[str]):
     wallet_data["transactions"].append(transaction)
     wallet_data["balance"] = balance - amount

-    with open(wallet_path, "w") as f:
-        json.dump(wallet_data, f, indent=2)
+    # Use _save_wallet to preserve encryption
+    if wallet_data.get("encrypted"):
+        password = _get_wallet_password(wallet_name)
+        _save_wallet(wallet_path, wallet_data, password)
+    else:
+        _save_wallet(wallet_path, wallet_data)

     success(f"Sent {amount} AITBC to {to_address}")
     output(
@@ -932,8 +936,7 @@ def unstake(ctx, stake_id: str):
         error(f"Wallet '{wallet_name}' not found")
         return

-    with open(wallet_path, "r") as f:
-        wallet_data = json.load(f)
+    wallet_data = _load_wallet(wallet_path, wallet_name)

     staking = wallet_data.get("staking", [])
     stake_record = next(
@@ -1145,13 +1148,85 @@ def multisig_propose(
     )


+@wallet.command(name="multisig-challenge")
+@click.option("--wallet", "wallet_name", required=True, help="Multisig wallet name")
+@click.argument("tx_id")
+@click.pass_context
+def multisig_challenge(ctx, wallet_name: str, tx_id: str):
+    """Create a cryptographic challenge for multisig transaction signing"""
+    wallet_dir = ctx.obj.get("wallet_dir", Path.home() / ".aitbc" / "wallets")
+    multisig_path = wallet_dir / f"{wallet_name}_multisig.json"
+
+    if not multisig_path.exists():
+        error(f"Multisig wallet '{wallet_name}' not found")
+        return
+
+    with open(multisig_path) as f:
+        ms_data = json.load(f)
+
+    # Find pending transaction
+    pending = ms_data.get("pending_transactions", [])
+    tx = next(
+        (t for t in pending if t["tx_id"] == tx_id and t["status"] == "pending"), None
+    )
+
+    if not tx:
+        error(f"Pending transaction '{tx_id}' not found")
+        return
+
+    # Import crypto utilities
+    from ..utils.crypto_utils import multisig_security
+
+    try:
+        # Create signing request
+        signing_request = multisig_security.create_signing_request(tx, wallet_name)
+
+        output({
+            "tx_id": tx_id,
+            "wallet": wallet_name,
+            "challenge": signing_request["challenge"],
+            "nonce": signing_request["nonce"],
+            "message": signing_request["message"],
+            "instructions": [
+                "1. Copy the challenge string above",
+                "2. Sign it with your private key using: aitbc wallet sign-challenge <challenge> <private-key>",
+                "3. Use the returned signature with: aitbc wallet multisig-sign --wallet <wallet> <tx_id> --signer <address> --signature <signature>"
+            ]
+        }, ctx.obj.get("output_format", "table"))
+
+    except Exception as e:
+        error(f"Failed to create challenge: {e}")
+
+
+@wallet.command(name="sign-challenge")
+@click.argument("challenge")
+@click.argument("private_key")
+@click.pass_context
+def sign_challenge(ctx, challenge: str, private_key: str):
+    """Sign a cryptographic challenge (for testing multisig)"""
+    from ..utils.crypto_utils import sign_challenge
+
+    try:
+        signature = sign_challenge(challenge, private_key)
+
+        output({
+            "challenge": challenge,
+            "signature": signature,
+            "message": "Use this signature with multisig-sign command"
+        }, ctx.obj.get("output_format", "table"))
+
+    except Exception as e:
+        error(f"Failed to sign challenge: {e}")
+
+
 @wallet.command(name="multisig-sign")
 @click.option("--wallet", "wallet_name", required=True, help="Multisig wallet name")
 @click.argument("tx_id")
 @click.option("--signer", required=True, help="Signer address")
+@click.option("--signature", required=True, help="Cryptographic signature (hex)")
 @click.pass_context
-def multisig_sign(ctx, wallet_name: str, tx_id: str, signer: str):
-    """Sign a pending multisig transaction"""
+def multisig_sign(ctx, wallet_name: str, tx_id: str, signer: str, signature: str):
+    """Sign a pending multisig transaction with cryptographic verification"""
     wallet_dir = ctx.obj.get("wallet_dir", Path.home() / ".aitbc" / "wallets")
     multisig_path = wallet_dir / f"{wallet_name}_multisig.json"

@@ -1167,6 +1242,16 @@ def multisig_sign(ctx, wallet_name: str, tx_id: str, signer: str):
         ctx.exit(1)
         return

+    # Import crypto utilities
+    from ..utils.crypto_utils import multisig_security
+
+    # Verify signature cryptographically
+    success, message = multisig_security.verify_and_add_signature(tx_id, signature, signer)
+    if not success:
+        error(f"Signature verification failed: {message}")
+        ctx.exit(1)
|
||||
return
|
||||
|
||||
pending = ms_data.get("pending_transactions", [])
|
||||
tx = next(
|
||||
(t for t in pending if t["tx_id"] == tx_id and t["status"] == "pending"), None
|
||||
@@ -1177,11 +1262,21 @@ def multisig_sign(ctx, wallet_name: str, tx_id: str, signer: str):
|
||||
ctx.exit(1)
|
||||
return
|
||||
|
||||
if signer in tx["signatures"]:
|
||||
error(f"'{signer}' has already signed this transaction")
|
||||
return
|
||||
# Check if already signed
|
||||
for sig in tx.get("signatures", []):
|
||||
if sig["signer"] == signer:
|
||||
error(f"'{signer}' has already signed this transaction")
|
||||
return
|
||||
|
||||
tx["signatures"].append(signer)
|
||||
# Add cryptographic signature
|
||||
if "signatures" not in tx:
|
||||
tx["signatures"] = []
|
||||
|
||||
tx["signatures"].append({
|
||||
"signer": signer,
|
||||
"signature": signature,
|
||||
"timestamp": datetime.now().isoformat()
|
||||
})
|
||||
|
||||
# Check if threshold met
|
||||
if len(tx["signatures"]) >= ms_data["threshold"]:
|
||||
|
||||
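The duplicate-signer guard and the threshold check at the end of this hunk can be exercised in isolation. A minimal, stdlib-only sketch with hypothetical data (addresses and signatures are placeholders, not real values):

```python
# Stand-in for a 2-of-3 multisig wallet's pending transaction record.
ms_data = {"threshold": 2}
tx = {"signatures": [
    {"signer": "0xAlice", "signature": "0xaa"},
    {"signer": "0xBob", "signature": "0xbb"},
]}

def already_signed(tx: dict, signer: str) -> bool:
    """Mirror of the duplicate-signer loop in the diff above."""
    return any(sig["signer"] == signer for sig in tx.get("signatures", []))

# A signer may only appear once; the threshold compares signature count.
ready = len(tx["signatures"]) >= ms_data["threshold"]
print(already_signed(tx, "0xAlice"), ready)  # -> True True
```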
@@ -30,6 +30,11 @@ from .commands.optimize import optimize
from .commands.swarm import swarm
from .commands.chain import chain
from .commands.genesis import genesis
from .commands.test_cli import test
from .commands.node import node
from .commands.analytics import analytics
from .commands.agent_comm import agent_comm
from .commands.deployment import deploy
from .plugins import plugin, load_plugins


@@ -65,10 +70,32 @@ from .plugins import plugin, load_plugins
    default=None,
    help="Path to config file"
)
@click.option(
    "--test-mode",
    is_flag=True,
    help="Enable test mode (uses mock data and test endpoints)"
)
@click.option(
    "--dry-run",
    is_flag=True,
    help="Dry run mode (show what would be done without executing)"
)
@click.option(
    "--timeout",
    type=int,
    default=30,
    help="Request timeout in seconds (useful for testing)"
)
@click.option(
    "--no-verify",
    is_flag=True,
    help="Skip SSL certificate verification (testing only)"
)
@click.version_option(version=__version__, prog_name="aitbc")
@click.pass_context
-def cli(ctx, url: Optional[str], api_key: Optional[str], output: str,
-        verbose: int, debug: bool, config_file: Optional[str]):
+def cli(ctx, url: Optional[str], api_key: Optional[str], output: str,
+        verbose: int, debug: bool, config_file: Optional[str], test_mode: bool,
+        dry_run: bool, timeout: int, no_verify: bool):
    """
    AITBC CLI - Command Line Interface for AITBC Network

@@ -93,6 +120,17 @@ def cli(ctx, url: Optional[str], api_key: Optional[str], output: str,
    ctx.obj['config'] = config
    ctx.obj['output_format'] = output
    ctx.obj['log_level'] = log_level
    ctx.obj['test_mode'] = test_mode
    ctx.obj['dry_run'] = dry_run
    ctx.obj['timeout'] = timeout
    ctx.obj['no_verify'] = no_verify

    # Apply test mode settings
    if test_mode:
        config.coordinator_url = config.coordinator_url or "http://localhost:8000"
        config.api_key = config.api_key or "test-api-key"
        if not config.api_key.startswith("test-"):
            config.api_key = f"test-{config.api_key}"


# Add command groups
@@ -111,23 +149,14 @@ cli.add_command(exchange)
cli.add_command(agent)
cli.add_command(multimodal)
cli.add_command(optimize)
# cli.add_command(openclaw)  # Temporarily disabled due to command registration issues
# cli.add_command(advanced)  # Temporarily disabled due to command registration issues
cli.add_command(swarm)
-from .commands.chain import chain  # NEW: Multi-chain management
-from .commands.genesis import genesis  # NEW: Genesis block commands
-from .commands.node import node  # NEW: Node management commands
-from .commands.analytics import analytics  # NEW: Analytics and monitoring
-from .commands.agent_comm import agent_comm  # NEW: Cross-chain agent communication
-# from .commands.marketplace_cmd import marketplace  # NEW: Global chain marketplace - disabled due to conflict
-from .commands.deployment import deploy  # NEW: Production deployment and scaling
-cli.add_command(chain)  # NEW: Multi-chain management
-cli.add_command(genesis)  # NEW: Genesis block commands
-cli.add_command(node)  # NEW: Node management commands
-cli.add_command(analytics)  # NEW: Analytics and monitoring
-cli.add_command(agent_comm)  # NEW: Cross-chain agent communication
-# cli.add_command(marketplace)  # NEW: Global chain marketplace - disabled due to conflict
-cli.add_command(deploy)  # NEW: Production deployment and scaling
+cli.add_command(chain)
+cli.add_command(genesis)
+cli.add_command(test)
+cli.add_command(node)
+cli.add_command(analytics)
+cli.add_command(agent_comm)
+cli.add_command(deploy)
cli.add_command(plugin)
load_plugins(cli)
26
cli/aitbc_cli/security/__init__.py
Normal file
@@ -0,0 +1,26 @@
"""
AITBC CLI Security Module

Security controls and policies for CLI operations, including
translation security, input validation, and operation auditing.
"""

from .translation_policy import (
    CLITranslationSecurityManager,
    SecurityLevel,
    TranslationMode,
    cli_translation_security,
    secure_translation,
    configure_translation_security,
    get_translation_security_report
)

__all__ = [
    "CLITranslationSecurityManager",
    "SecurityLevel",
    "TranslationMode",
    "cli_translation_security",
    "secure_translation",
    "configure_translation_security",
    "get_translation_security_report"
]
420
cli/aitbc_cli/security/translation_policy.py
Normal file
@@ -0,0 +1,420 @@
"""
AITBC CLI Translation Security Policy

This module implements strict security controls for CLI translation functionality,
ensuring that translation services never compromise security-sensitive operations.
"""

import os
import logging
from typing import Dict, List, Optional, Union
from dataclasses import dataclass
from enum import Enum
import asyncio
from pathlib import Path

logger = logging.getLogger(__name__)


class SecurityLevel(Enum):
    """Security levels for CLI operations"""
    CRITICAL = "critical"  # Security-sensitive commands (agent strategy, wallet operations)
    HIGH = "high"          # Important operations (deployment, configuration)
    MEDIUM = "medium"      # Standard operations (monitoring, reporting)
    LOW = "low"            # Informational operations (help, status)


class TranslationMode(Enum):
    """Translation operation modes"""
    DISABLED = "disabled"      # No translation allowed
    LOCAL_ONLY = "local_only"  # Only local translation (no external APIs)
    FALLBACK = "fallback"      # External APIs with local fallback
    FULL = "full"              # Full translation capabilities


@dataclass
class SecurityPolicy:
    """Security policy for translation usage"""
    security_level: SecurityLevel
    translation_mode: TranslationMode
    allow_external_apis: bool
    require_explicit_consent: bool
    timeout_seconds: int
    max_retries: int
    cache_translations: bool


@dataclass
class TranslationRequest:
    """Translation request with security context"""
    text: str
    target_language: str
    source_language: str = "en"
    command_name: Optional[str] = None
    security_level: SecurityLevel = SecurityLevel.MEDIUM
    user_consent: bool = False


@dataclass
class TranslationResponse:
    """Translation response with security metadata"""
    translated_text: str
    success: bool
    method_used: str
    security_compliant: bool
    warning_messages: List[str]
    fallback_used: bool


class CLITranslationSecurityManager:
    """
    Security manager for CLI translation operations

    Enforces strict policies to ensure translation never compromises
    security-sensitive operations.
    """

    def __init__(self, config_path: Optional[Path] = None):
        self.config_path = config_path or Path.home() / ".aitbc" / "translation_security.json"
        self.policies = self._load_default_policies()
        self.security_log = []

    def _load_default_policies(self) -> Dict[SecurityLevel, SecurityPolicy]:
        """Load default security policies"""
        return {
            SecurityLevel.CRITICAL: SecurityPolicy(
                security_level=SecurityLevel.CRITICAL,
                translation_mode=TranslationMode.DISABLED,
                allow_external_apis=False,
                require_explicit_consent=True,
                timeout_seconds=0,
                max_retries=0,
                cache_translations=False
            ),
            SecurityLevel.HIGH: SecurityPolicy(
                security_level=SecurityLevel.HIGH,
                translation_mode=TranslationMode.LOCAL_ONLY,
                allow_external_apis=False,
                require_explicit_consent=True,
                timeout_seconds=5,
                max_retries=1,
                cache_translations=True
            ),
            SecurityLevel.MEDIUM: SecurityPolicy(
                security_level=SecurityLevel.MEDIUM,
                translation_mode=TranslationMode.FALLBACK,
                allow_external_apis=True,
                require_explicit_consent=False,
                timeout_seconds=10,
                max_retries=2,
                cache_translations=True
            ),
            SecurityLevel.LOW: SecurityPolicy(
                security_level=SecurityLevel.LOW,
                translation_mode=TranslationMode.FULL,
                allow_external_apis=True,
                require_explicit_consent=False,
                timeout_seconds=15,
                max_retries=3,
                cache_translations=True
            )
        }
    def get_command_security_level(self, command_name: str) -> SecurityLevel:
        """Determine security level for a command"""
        # Critical security-sensitive commands
        critical_commands = {
            'agent', 'strategy', 'wallet', 'sign', 'deploy', 'genesis',
            'transfer', 'send', 'approve', 'mint', 'burn', 'stake'
        }

        # High importance commands
        high_commands = {
            'config', 'node', 'chain', 'marketplace', 'swap', 'liquidity',
            'governance', 'vote', 'proposal'
        }

        # Medium importance commands
        medium_commands = {
            'balance', 'status', 'monitor', 'analytics', 'logs', 'history',
            'simulate', 'test'
        }

        # Low importance commands (informational)
        low_commands = {
            'help', 'version', 'info', 'list', 'show', 'explain'
        }

        command_base = command_name.split()[0].lower()

        if command_base in critical_commands:
            return SecurityLevel.CRITICAL
        elif command_base in high_commands:
            return SecurityLevel.HIGH
        elif command_base in medium_commands:
            return SecurityLevel.MEDIUM
        elif command_base in low_commands:
            return SecurityLevel.LOW
        else:
            # Default to medium for unknown commands
            return SecurityLevel.MEDIUM
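The classifier keys off the first token of the invocation. A compact, stdlib-only sketch of the same first-token lookup (the tiers are re-declared as plain strings purely for illustration):

```python
# Classify a CLI invocation by its first token, mirroring
# get_command_security_level(); the sets mirror the tiers above.
CRITICAL = {'agent', 'strategy', 'wallet', 'sign', 'deploy', 'genesis',
            'transfer', 'send', 'approve', 'mint', 'burn', 'stake'}
HIGH = {'config', 'node', 'chain', 'marketplace', 'swap', 'liquidity',
        'governance', 'vote', 'proposal'}
LOW = {'help', 'version', 'info', 'list', 'show', 'explain'}

def classify(command_name: str) -> str:
    base = command_name.split()[0].lower()
    if base in CRITICAL:
        return "critical"
    if base in HIGH:
        return "high"
    if base in LOW:
        return "low"
    return "medium"  # default for unknown and medium-tier commands

print(classify("wallet multisig-sign"))  # -> critical
print(classify("frobnicate"))            # -> medium
```

Note that unknown commands deliberately land on MEDIUM, not LOW: a new command gets external-API translation with local fallback, but never the unrestricted FULL mode.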
    async def translate_with_security(self, request: TranslationRequest) -> TranslationResponse:
        """
        Translate text with security enforcement

        Args:
            request: Translation request with security context

        Returns:
            Translation response with security metadata
        """
        # Determine security level if not provided
        if request.security_level == SecurityLevel.MEDIUM and request.command_name:
            request.security_level = self.get_command_security_level(request.command_name)

        policy = self.policies[request.security_level]
        warnings = []

        # Log security check
        self._log_security_check(request, policy)

        # Check if translation is allowed
        if policy.translation_mode == TranslationMode.DISABLED:
            return TranslationResponse(
                translated_text=request.text,  # Return original
                success=True,
                method_used="disabled",
                security_compliant=True,
                warning_messages=["Translation disabled for security-sensitive operation"],
                fallback_used=False
            )

        # Check user consent for high-security operations
        if policy.require_explicit_consent and not request.user_consent:
            return TranslationResponse(
                translated_text=request.text,  # Return original
                success=True,
                method_used="consent_required",
                security_compliant=True,
                warning_messages=["User consent required for translation"],
                fallback_used=False
            )

        # Attempt translation based on policy
        try:
            if policy.translation_mode == TranslationMode.LOCAL_ONLY:
                result = await self._local_translate(request)
                method_used = "local"
            elif policy.translation_mode == TranslationMode.FALLBACK:
                # Try external first, fallback to local
                result, fallback_used = await self._external_translate_with_fallback(request, policy)
                method_used = "external_fallback"
            else:  # FULL
                result = await self._external_translate(request, policy)
                method_used = "external"
                fallback_used = False

            return TranslationResponse(
                translated_text=result,
                success=True,
                method_used=method_used,
                security_compliant=True,
                warning_messages=warnings,
                fallback_used=fallback_used if method_used == "external_fallback" else False
            )

        except Exception as e:
            logger.error(f"Translation failed: {e}")
            warnings.append(f"Translation failed: {str(e)}")

            # Always fall back to the original text for security
            return TranslationResponse(
                translated_text=request.text,
                success=False,
                method_used="error_fallback",
                security_compliant=True,
                warning_messages=warnings + ["Falling back to original text for security"],
                fallback_used=True
            )

    async def _local_translate(self, request: TranslationRequest) -> str:
        """Local translation without external APIs"""
        # Simple local translation dictionary for common terms
        local_translations = {
            # Help messages
            "help": {"es": "ayuda", "fr": "aide", "de": "hilfe", "zh": "帮助"},
            "error": {"es": "error", "fr": "erreur", "de": "fehler", "zh": "错误"},
            "success": {"es": "éxito", "fr": "succès", "de": "erfolg", "zh": "成功"},
            "warning": {"es": "advertencia", "fr": "avertissement", "de": "warnung", "zh": "警告"},
            "status": {"es": "estado", "fr": "statut", "de": "status", "zh": "状态"},
            "balance": {"es": "saldo", "fr": "solde", "de": "guthaben", "zh": "余额"},
            "wallet": {"es": "cartera", "fr": "portefeuille", "de": "börse", "zh": "钱包"},
            "transaction": {"es": "transacción", "fr": "transaction", "de": "transaktion", "zh": "交易"},
            "blockchain": {"es": "cadena de bloques", "fr": "chaîne de blocs", "de": "blockchain", "zh": "区块链"},
            "agent": {"es": "agente", "fr": "agent", "de": "agent", "zh": "代理"},
        }

        # Simple word-by-word translation
        words = request.text.lower().split()
        translated_words = []

        for word in words:
            if word in local_translations and request.target_language in local_translations[word]:
                translated_words.append(local_translations[word][request.target_language])
            else:
                translated_words.append(word)  # Keep original if no translation

        return " ".join(translated_words)
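The local path is deliberately naive: a word-by-word glossary lookup that keeps any word it does not know. A stdlib-only sketch of the same pass (the glossary is a small excerpt of the table above):

```python
# Word-by-word fallback translation, mirroring _local_translate().
GLOSSARY = {
    "wallet": {"es": "cartera", "fr": "portefeuille"},
    "balance": {"es": "saldo", "fr": "solde"},
    "error": {"es": "error", "fr": "erreur"},
}

def local_translate(text: str, target: str) -> str:
    out = []
    for word in text.lower().split():
        # Keep the original word when the glossary has no entry.
        out.append(GLOSSARY.get(word, {}).get(target, word))
    return " ".join(out)

print(local_translate("Wallet balance OK", "es"))  # -> cartera saldo ok
```

Because unmatched words pass through unchanged, this path can never leak text to an external service; the cost is that output casing and word order are lost.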
    async def _external_translate_with_fallback(self, request: TranslationRequest, policy: SecurityPolicy) -> tuple[str, bool]:
        """External translation with local fallback"""
        try:
            # Try external translation first
            result = await self._external_translate(request, policy)
            return result, False
        except Exception as e:
            logger.warning(f"External translation failed, using local fallback: {e}")
            result = await self._local_translate(request)
            return result, True

    async def _external_translate(self, request: TranslationRequest, policy: SecurityPolicy) -> str:
        """External translation with timeout and retry logic"""
        if not policy.allow_external_apis:
            raise Exception("External APIs not allowed for this security level")

        # This would integrate with external translation services.
        # For security, we implement a mock that demonstrates the pattern.
        await asyncio.sleep(0.1)  # Simulate API call

        # Mock translation - in reality, this would call external APIs
        return f"[Translated to {request.target_language}: {request.text}]"

    def _log_security_check(self, request: TranslationRequest, policy: SecurityPolicy):
        """Log security check for audit trail"""
        log_entry = {
            "timestamp": asyncio.get_event_loop().time(),
            "command": request.command_name,
            "security_level": request.security_level.value,
            "translation_mode": policy.translation_mode.value,
            "target_language": request.target_language,
            "user_consent": request.user_consent,
            "text_length": len(request.text)
        }

        self.security_log.append(log_entry)

        # Keep only the last 1000 entries
        if len(self.security_log) > 1000:
            self.security_log = self.security_log[-1000:]

    def get_security_summary(self) -> Dict:
        """Get summary of security checks"""
        if not self.security_log:
            return {"total_checks": 0, "message": "No security checks performed"}

        total_checks = len(self.security_log)
        by_level = {}
        by_language = {}

        for entry in self.security_log:
            level = entry["security_level"]
            lang = entry["target_language"]

            by_level[level] = by_level.get(level, 0) + 1
            by_language[lang] = by_language.get(lang, 0) + 1

        return {
            "total_checks": total_checks,
            "by_security_level": by_level,
            "by_target_language": by_language,
            "recent_checks": self.security_log[-10:]  # Last 10 checks
        }

    def is_translation_allowed(self, command_name: str, target_language: str) -> bool:
        """Quick check whether translation is allowed for a command"""
        security_level = self.get_command_security_level(command_name)
        policy = self.policies[security_level]

        return policy.translation_mode != TranslationMode.DISABLED

    def get_security_policy_for_command(self, command_name: str) -> SecurityPolicy:
        """Get the security policy for a specific command"""
        security_level = self.get_command_security_level(command_name)
        return self.policies[security_level]


# Global security manager instance
cli_translation_security = CLITranslationSecurityManager()


# Decorator for CLI commands to enforce translation security
def secure_translation(allowed_languages: Optional[List[str]] = None, require_consent: bool = False):
    """
    Decorator to enforce translation security on CLI commands

    Args:
        allowed_languages: List of allowed target languages
        require_consent: Whether to require explicit user consent
    """
    def decorator(func):
        async def wrapper(*args, **kwargs):
            # This would integrate with the CLI command framework
            # to enforce translation policies
            return await func(*args, **kwargs)
        return wrapper
    return decorator


# Security policy configuration functions
def configure_translation_security(
    critical_level: str = "disabled",
    high_level: str = "local_only",
    medium_level: str = "fallback",
    low_level: str = "full"
):
    """Configure translation security policies"""
    mode_mapping = {
        "disabled": TranslationMode.DISABLED,
        "local_only": TranslationMode.LOCAL_ONLY,
        "fallback": TranslationMode.FALLBACK,
        "full": TranslationMode.FULL
    }

    cli_translation_security.policies[SecurityLevel.CRITICAL].translation_mode = mode_mapping[critical_level]
    cli_translation_security.policies[SecurityLevel.HIGH].translation_mode = mode_mapping[high_level]
    cli_translation_security.policies[SecurityLevel.MEDIUM].translation_mode = mode_mapping[medium_level]
    cli_translation_security.policies[SecurityLevel.LOW].translation_mode = mode_mapping[low_level]
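`configure_translation_security` accepts plain strings and maps them to enum members through a dict lookup, so an unrecognized level name fails fast. A small sketch of that mapping behaviour (the enum is re-declared locally for illustration):

```python
from enum import Enum

class TranslationMode(Enum):  # local re-declaration for the sketch
    DISABLED = "disabled"
    LOCAL_ONLY = "local_only"
    FALLBACK = "fallback"
    FULL = "full"

# String-to-enum table, equivalent to mode_mapping above.
MODE_MAPPING = {m.value: m for m in TranslationMode}

print(MODE_MAPPING["local_only"])  # -> TranslationMode.LOCAL_ONLY

# An unknown level name raises KeyError before any policy is touched,
# which is the behaviour the function inherits from dict lookup.
try:
    MODE_MAPPING["paranoid"]
except KeyError as e:
    print("rejected:", e)
```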
def get_translation_security_report() -> Dict:
    """Get a comprehensive translation security report"""
    return {
        "security_policies": {
            level.value: policy.translation_mode.value
            for level, policy in cli_translation_security.policies.items()
        },
        "security_summary": cli_translation_security.get_security_summary(),
        "critical_commands": [
            cmd for cmd in ['agent', 'strategy', 'wallet', 'sign', 'deploy']
            if cli_translation_security.get_command_security_level(cmd) == SecurityLevel.CRITICAL
        ],
        "recommendations": _get_security_recommendations()
    }


def _get_security_recommendations() -> List[str]:
    """Get security recommendations"""
    recommendations = []

    # Check that critical commands have proper restrictions
    for cmd in ['agent', 'strategy', 'wallet', 'sign']:
        if cli_translation_security.is_translation_allowed(cmd, 'es'):
            recommendations.append(f"Consider disabling translation for '{cmd}' command")

    # Check for external API usage in sensitive operations
    critical_policy = cli_translation_security.policies[SecurityLevel.CRITICAL]
    if critical_policy.allow_external_apis:
        recommendations.append("External APIs should be disabled for critical operations")

    return recommendations
@@ -41,39 +41,32 @@ def progress_spinner(description: str = "Working..."):


class AuditLogger:
-    """Audit logging for CLI operations"""
+    """Tamper-evident audit logging for CLI operations"""

    def __init__(self, log_dir: Optional[Path] = None):
-        self.log_dir = log_dir or Path.home() / ".aitbc" / "audit"
-        self.log_dir.mkdir(parents=True, exist_ok=True)
-        self.log_file = self.log_dir / "audit.jsonl"
+        # Import secure audit logger
+        from .secure_audit import SecureAuditLogger
+        self._secure_logger = SecureAuditLogger(log_dir)

    def log(self, action: str, details: dict = None, user: str = None):
-        """Log an audit event"""
-        import datetime
-        entry = {
-            "timestamp": datetime.datetime.now().isoformat(),
-            "action": action,
-            "user": user or os.environ.get("USER", "unknown"),
-            "details": details or {}
-        }
-        with open(self.log_file, "a") as f:
-            f.write(json.dumps(entry) + "\n")
+        """Log an audit event with cryptographic integrity"""
+        self._secure_logger.log(action, details, user)

    def get_logs(self, limit: int = 50, action_filter: str = None) -> list:
-        """Read audit log entries"""
-        if not self.log_file.exists():
-            return []
-        entries = []
-        with open(self.log_file) as f:
-            for line in f:
-                line = line.strip()
-                if line:
-                    entry = json.loads(line)
-                    if action_filter and entry.get("action") != action_filter:
-                        continue
-                    entries.append(entry)
-        return entries[-limit:]
+        """Read audit log entries with integrity verification"""
+        return self._secure_logger.get_logs(limit, action_filter)

+    def verify_integrity(self) -> Tuple[bool, List[str]]:
+        """Verify audit log integrity"""
+        return self._secure_logger.verify_integrity()
+
+    def export_report(self, output_file: Optional[Path] = None) -> Dict:
+        """Export comprehensive audit report"""
+        return self._secure_logger.export_audit_report(output_file)
+
+    def search_logs(self, query: str, limit: int = 50) -> List[Dict]:
+        """Search audit logs"""
+        return self._secure_logger.search_logs(query, limit)
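`SecureAuditLogger`'s internals are not shown in this diff. A common way to make an append-only JSONL audit log tamper-evident is to chain each entry to the hash of the previous one; the following stdlib-only sketch illustrates that idea (an assumption about the general technique, not necessarily how `SecureAuditLogger` implements it):

```python
import hashlib
import json

def append_entry(log: list, action: str, details: dict) -> None:
    """Append an entry whose hash covers the previous entry's hash."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = {"action": action, "details": details, "prev_hash": prev_hash}
    payload = json.dumps(body, sort_keys=True).encode()
    log.append({**body, "hash": hashlib.sha256(payload).hexdigest()})

def verify_chain(log: list) -> bool:
    """Recompute every hash; any edited or deleted entry breaks the chain."""
    prev_hash = "0" * 64
    for entry in log:
        if entry["prev_hash"] != prev_hash:
            return False
        body = {k: entry[k] for k in ("action", "details", "prev_hash")}
        payload = json.dumps(body, sort_keys=True).encode()
        if entry["hash"] != hashlib.sha256(payload).hexdigest():
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_entry(log, "wallet.create", {"name": "main"})
append_entry(log, "wallet.sign", {"tx": "abc"})
print(verify_chain(log))            # -> True
log[0]["details"]["name"] = "evil"  # tamper with the first entry
print(verify_chain(log))            # -> False
```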
def _get_fernet_key(key: str = None) -> bytes:
@@ -133,7 +126,7 @@ def setup_logging(verbosity: int, debug: bool = False) -> str:
    return log_level


-def output(data: Any, format_type: str = "table", title: str = None):
+def render(data: Any, format_type: str = "table", title: str = None):
    """Format and output data"""
    if format_type == "json":
        console.print(json.dumps(data, indent=2, default=str))
@@ -176,6 +169,12 @@ def output(data: Any, format_type: str = "table", title: str = None):
        console.print(data)


+# Backward compatibility alias
+def output(data: Any, format_type: str = "table", title: str = None):
+    """Deprecated: use render() instead - kept for backward compatibility"""
+    return render(data, format_type, title)
+
+
def error(message: str):
    """Print error message"""
    console.print(Panel(f"[red]Error: {message}[/red]", title="❌"))
@@ -267,7 +266,30 @@ def create_http_client_with_retry(

        for attempt in range(self.max_retries + 1):
            try:
-                return super().handle_request(request)
+                response = super().handle_request(request)
+
+                # Check for retryable HTTP status codes
+                if hasattr(response, 'status_code'):
+                    retryable_codes = {429, 502, 503, 504}
+                    if response.status_code in retryable_codes:
+                        last_exception = httpx.HTTPStatusError(
+                            f"Retryable status code {response.status_code}",
+                            request=request,
+                            response=response
+                        )
+
+                        if attempt == self.max_retries:
+                            break
+
+                        delay = min(
+                            self.base_delay * (self.backoff_factor ** attempt),
+                            self.max_delay
+                        )
+                        time.sleep(delay)
+                        continue
+
+                return response
+
            except (httpx.NetworkError, httpx.TimeoutException) as e:
                last_exception = e
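The retry transport uses capped exponential backoff: the wait before each retry is `base_delay * backoff_factor**attempt`, clamped to `max_delay`. That schedule can be sketched in isolation (the parameter values below are illustrative, not the CLI's actual defaults):

```python
def backoff_delays(max_retries: int, base_delay: float,
                   backoff_factor: float, max_delay: float) -> list:
    """Delay before each retry: base * factor**attempt, capped at max_delay."""
    return [min(base_delay * (backoff_factor ** attempt), max_delay)
            for attempt in range(max_retries)]

print(backoff_delays(5, 0.5, 2.0, 4.0))  # -> [0.5, 1.0, 2.0, 4.0, 4.0]
```

The cap matters for codes like 429: without it, a handful of retries against a rate-limited endpoint can grow into multi-minute sleeps.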
233
cli/aitbc_cli/utils/crypto_utils.py
Normal file
@@ -0,0 +1,233 @@
"""
Cryptographic Utilities for CLI Security

Provides real signature verification for multisig operations.
"""

import hashlib
import secrets
from typing import Dict, Optional, Tuple
from eth_account import Account
from eth_utils import to_checksum_address, keccak
import json


def create_signature_challenge(tx_data: Dict, nonce: str) -> str:
    """
    Create a cryptographic challenge for transaction signing

    Args:
        tx_data: Transaction data to sign
        nonce: Unique nonce to prevent replay attacks

    Returns:
        Challenge string to be signed
    """
    # Create a deterministic challenge from the transaction data
    challenge_data = {
        "tx_id": tx_data.get("tx_id"),
        "to": tx_data.get("to"),
        "amount": tx_data.get("amount"),
        "nonce": nonce,
        "timestamp": tx_data.get("timestamp")
    }

    # Sort keys for deterministic ordering
    challenge_str = json.dumps(challenge_data, sort_keys=True, separators=(',', ':'))
    challenge_hash = keccak(challenge_str.encode())

    return f"AITBC_MULTISIG_CHALLENGE:{challenge_hash.hex()}"
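Determinism here rests entirely on `sort_keys=True` and the compact separators: every signer must derive byte-identical JSON before hashing. A stdlib-only sketch of the same canonicalize-then-hash step (sha256 stands in for keccak, which `hashlib` does not provide):

```python
import hashlib
import json

def make_challenge(tx_data: dict, nonce: str) -> str:
    """Hash a canonical JSON encoding so every signer derives the same string."""
    body = {"tx_id": tx_data.get("tx_id"), "to": tx_data.get("to"),
            "amount": tx_data.get("amount"), "nonce": nonce,
            "timestamp": tx_data.get("timestamp")}
    canonical = json.dumps(body, sort_keys=True, separators=(',', ':'))
    return "CHALLENGE:" + hashlib.sha256(canonical.encode()).hexdigest()

tx_a = {"tx_id": "t1", "to": "0xAbc", "amount": 5, "timestamp": 1700000000}
tx_b = {"timestamp": 1700000000, "amount": 5, "to": "0xAbc", "tx_id": "t1"}

# Key order in the input dict does not matter; the challenge is identical.
print(make_challenge(tx_a, "n1") == make_challenge(tx_b, "n1"))  # -> True
# A fresh nonce changes the challenge, which is what defeats replay.
print(make_challenge(tx_a, "n1") == make_challenge(tx_a, "n2"))  # -> False
```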
def verify_signature(
    challenge: str,
    signature: str,
    signer_address: str
) -> bool:
    """
    Verify that a signature was created by the specified signer

    Args:
        challenge: Challenge string that was signed
        signature: Hex signature string
        signer_address: Expected signer address

    Returns:
        True if the signature is valid
    """
    try:
        # Remove 0x prefix if present
        if signature.startswith("0x"):
            signature = signature[2:]

        # Convert to bytes
        signature_bytes = bytes.fromhex(signature)

        # Recover the address from the signature. The challenge is wrapped
        # with EIP-191 encoding so Account.recover_message() can process it.
        from eth_account.messages import encode_defunct
        message = encode_defunct(text=challenge)
        recovered_address = Account.recover_message(
            message,
            signature=signature_bytes
        )

        # Compare with the expected signer
        return to_checksum_address(recovered_address) == to_checksum_address(signer_address)

    except Exception:
        return False


def sign_challenge(challenge: str, private_key: str) -> str:
    """
    Sign a challenge with a private key

    Args:
        challenge: Challenge string to sign
        private_key: Private key in hex format

    Returns:
        Signature as a hex string
    """
    try:
        # Remove 0x prefix if present
        if private_key.startswith("0x"):
            private_key = private_key[2:]

        account = Account.from_key("0x" + private_key)
        # Sign the EIP-191 encoding of the challenge so that
        # verify_signature() can recover the address
        from eth_account.messages import encode_defunct
        message = encode_defunct(text=challenge)
        signed = account.sign_message(message)

        return "0x" + signed.signature.hex()

    except Exception as e:
        raise ValueError(f"Failed to sign challenge: {e}")


def generate_nonce() -> str:
    """Generate a secure nonce for transaction challenges"""
    return secrets.token_hex(16)


def validate_multisig_transaction(tx_data: Dict) -> Tuple[bool, str]:
    """
    Validate multisig transaction structure

    Args:
        tx_data: Transaction data to validate

    Returns:
        Tuple of (is_valid, error_message)
    """
    required_fields = ["tx_id", "to", "amount", "timestamp", "nonce"]

    for field in required_fields:
        if field not in tx_data:
            return False, f"Missing required field: {field}"

    # Validate address format
    try:
        to_checksum_address(tx_data["to"])
    except Exception:
        return False, "Invalid recipient address format"

    # Validate amount
    try:
        amount = float(tx_data["amount"])
        if amount <= 0:
            return False, "Amount must be positive"
    except Exception:
        return False, "Invalid amount format"

    return True, ""
|
||||
|
||||
class MultisigSecurityManager:
|
||||
"""Security manager for multisig operations"""
|
||||
|
||||
def __init__(self):
|
||||
self.pending_challenges: Dict[str, Dict] = {}
|
||||
|
||||
def create_signing_request(
|
||||
self,
|
||||
tx_data: Dict,
|
||||
multisig_wallet: str
|
||||
) -> Dict[str, str]:
|
||||
"""
|
||||
Create a signing request with cryptographic challenge
|
||||
|
||||
Args:
|
||||
tx_data: Transaction data
|
||||
multisig_wallet: Multisig wallet identifier
|
||||
|
||||
Returns:
|
||||
Signing request with challenge
|
||||
"""
|
||||
# Validate transaction
|
||||
is_valid, error = validate_multisig_transaction(tx_data)
|
||||
if not is_valid:
|
||||
raise ValueError(f"Invalid transaction: {error}")
|
||||
|
||||
# Generate nonce and challenge
|
||||
nonce = generate_nonce()
|
||||
challenge = create_signature_challenge(tx_data, nonce)
|
||||
|
||||
# Store challenge for verification
|
||||
self.pending_challenges[tx_data["tx_id"]] = {
|
||||
"challenge": challenge,
|
||||
"tx_data": tx_data,
|
||||
"multisig_wallet": multisig_wallet,
|
||||
"nonce": nonce,
|
||||
"created_at": secrets.token_hex(8)
|
||||
}
|
||||
|
||||
return {
|
||||
"tx_id": tx_data["tx_id"],
|
||||
"challenge": challenge,
|
||||
"nonce": nonce,
|
||||
"signers_required": len(tx_data.get("required_signers", [])),
|
||||
"message": f"Please sign this challenge to authorize transaction {tx_data['tx_id']}"
|
||||
}
|
||||
|
||||
def verify_and_add_signature(
|
||||
self,
|
||||
tx_id: str,
|
||||
signature: str,
|
||||
signer_address: str
|
||||
) -> Tuple[bool, str]:
|
||||
"""
|
||||
Verify signature and add to transaction
|
||||
|
||||
Args:
|
||||
tx_id: Transaction ID
|
||||
signature: Signature to verify
|
||||
signer_address: Address of signer
|
||||
|
||||
Returns:
|
||||
Tuple of (success, message)
|
||||
"""
|
||||
if tx_id not in self.pending_challenges:
|
||||
return False, "Transaction not found or expired"
|
||||
|
||||
challenge_data = self.pending_challenges[tx_id]
|
||||
challenge = challenge_data["challenge"]
|
||||
|
||||
# Verify signature
|
||||
if not verify_signature(challenge, signature, signer_address):
|
||||
return False, f"Invalid signature for signer {signer_address}"
|
||||
|
||||
# Check if signer is authorized
|
||||
tx_data = challenge_data["tx_data"]
|
||||
authorized_signers = tx_data.get("required_signers", [])
|
||||
|
||||
if signer_address not in authorized_signers:
|
||||
return False, f"Signer {signer_address} is not authorized"
|
||||
|
||||
return True, "Signature verified successfully"
|
||||
|
||||
def cleanup_challenge(self, tx_id: str):
|
||||
"""Clean up challenge after transaction completion"""
|
||||
if tx_id in self.pending_challenges:
|
||||
del self.pending_challenges[tx_id]
|
||||
|
||||
|
||||
# Global security manager instance
|
||||
multisig_security = MultisigSecurityManager()
|
||||
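The request flow above can be exercised without the wallet daemon. This standalone sketch mirrors `validate_multisig_transaction` and the challenge format: `hashlib.sha256` stands in for keccak, the checksum-address check is omitted to stay stdlib-only, and the field values are made up for illustration.

```python
import hashlib
import json
import secrets

REQUIRED_FIELDS = ("tx_id", "to", "amount", "timestamp", "nonce")


def validate_tx(tx_data: dict) -> tuple[bool, str]:
    # Mirrors validate_multisig_transaction: field presence + positive amount
    for field in REQUIRED_FIELDS:
        if field not in tx_data:
            return False, f"Missing required field: {field}"
    try:
        if float(tx_data["amount"]) <= 0:
            return False, "Amount must be positive"
    except (TypeError, ValueError):
        return False, "Invalid amount format"
    return True, ""


def make_challenge(tx_data: dict, nonce: str) -> str:
    # sha256 stands in for keccak, purely for illustration
    canonical = json.dumps({**tx_data, "nonce": nonce}, sort_keys=True)
    digest = hashlib.sha256(canonical.encode()).hexdigest()
    return f"AITBC_MULTISIG_CHALLENGE:{digest}"


tx = {"tx_id": "tx-1", "to": "0xabc", "amount": "1.5",
      "timestamp": 1700000000, "nonce": ""}
ok, err = validate_tx(tx)
challenge = make_challenge(tx, secrets.token_hex(16))
```

Because the canonical JSON is sorted, the same transaction and nonce always produce the same challenge, which is what lets every signer independently verify what they are authorizing.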
335
cli/aitbc_cli/utils/secure_audit.py
Normal file
@@ -0,0 +1,335 @@
"""
Tamper-Evident Audit Logger
Provides cryptographic integrity for audit logs
"""

import json
import secrets
from pathlib import Path
from datetime import datetime
from typing import Dict, List, Optional, Tuple
from eth_utils import keccak


class SecureAuditLogger:
    """
    Tamper-evident audit logger with cryptographic integrity
    Each entry includes hash of previous entry for chain integrity
    """

    def __init__(self, log_dir: Optional[Path] = None):
        self.log_dir = log_dir or Path.home() / ".aitbc" / "audit"
        self.log_dir.mkdir(parents=True, exist_ok=True)
        self.log_file = self.log_dir / "audit_secure.jsonl"
        self.integrity_file = self.log_dir / "integrity.json"

        # Initialize integrity tracking
        self._init_integrity()

    def _init_integrity(self):
        """Initialize integrity tracking"""
        if not self.integrity_file.exists():
            integrity_data = {
                "genesis_hash": None,
                "last_hash": None,
                "entry_count": 0,
                "created_at": datetime.utcnow().isoformat(),
                "version": "1.0"
            }
            with open(self.integrity_file, "w") as f:
                json.dump(integrity_data, f, indent=2)

    def _get_integrity_data(self) -> Dict:
        """Get current integrity data"""
        with open(self.integrity_file, "r") as f:
            return json.load(f)

    def _update_integrity(self, entry_hash: str):
        """Update integrity tracking"""
        integrity_data = self._get_integrity_data()

        if integrity_data["genesis_hash"] is None:
            integrity_data["genesis_hash"] = entry_hash

        integrity_data["last_hash"] = entry_hash
        integrity_data["entry_count"] += 1
        integrity_data["last_updated"] = datetime.utcnow().isoformat()

        with open(self.integrity_file, "w") as f:
            json.dump(integrity_data, f, indent=2)

    def _create_entry_hash(self, entry: Dict, previous_hash: Optional[str] = None) -> str:
        """
        Create cryptographic hash for audit entry

        Args:
            entry: Audit entry data
            previous_hash: Hash of previous entry for chain integrity

        Returns:
            Entry hash
        """
        # Create canonical representation
        entry_data = {
            "timestamp": entry["timestamp"],
            "action": entry["action"],
            "user": entry["user"],
            "details": entry["details"],
            "previous_hash": previous_hash,
            "nonce": entry.get("nonce", "")
        }

        # Sort keys for deterministic ordering
        entry_str = json.dumps(entry_data, sort_keys=True, separators=(',', ':'))
        return keccak(entry_str.encode()).hex()

    def log(self, action: str, details: Optional[dict] = None, user: Optional[str] = None):
        """
        Log an audit event with cryptographic integrity

        Args:
            action: Action being logged
            details: Additional details
            user: User performing action
        """
        # Get previous hash for chain integrity
        integrity_data = self._get_integrity_data()
        previous_hash = integrity_data["last_hash"]

        # Create audit entry; previous_hash is stored on the entry so that
        # verify_integrity() can check each chain link
        entry = {
            "timestamp": datetime.utcnow().isoformat(),
            "action": action,
            "user": user or "unknown",
            "details": details or {},
            "nonce": secrets.token_hex(16),
            "previous_hash": previous_hash
        }

        # Create entry hash
        entry_hash = self._create_entry_hash(entry, previous_hash)
        entry["entry_hash"] = entry_hash

        # Write to log file
        with open(self.log_file, "a") as f:
            f.write(json.dumps(entry) + "\n")

        # Update integrity tracking
        self._update_integrity(entry_hash)

    def verify_integrity(self) -> Tuple[bool, List[str]]:
        """
        Verify the integrity of the entire audit log

        Returns:
            Tuple of (is_valid, issues)
        """
        if not self.log_file.exists():
            return True, ["No audit log exists"]

        issues = []
        previous_hash = None
        entry_count = 0

        try:
            with open(self.log_file, "r") as f:
                for line_num, line in enumerate(f, 1):
                    if not line.strip():
                        continue

                    entry = json.loads(line)
                    entry_count += 1

                    # Verify entry hash
                    expected_hash = self._create_entry_hash(entry, previous_hash)
                    actual_hash = entry.get("entry_hash")

                    if actual_hash != expected_hash:
                        issues.append(f"Line {line_num}: Hash mismatch - entry may be tampered")

                    # Verify chain integrity
                    if previous_hash and entry.get("previous_hash") != previous_hash:
                        issues.append(f"Line {line_num}: Chain integrity broken")

                    previous_hash = actual_hash

            # Verify against integrity file
            integrity_data = self._get_integrity_data()

            if integrity_data["entry_count"] != entry_count:
                issues.append(f"Entry count mismatch: log has {entry_count}, integrity says {integrity_data['entry_count']}")

            if integrity_data["last_hash"] != previous_hash:
                issues.append("Final hash mismatch with integrity file")

            return len(issues) == 0, issues

        except Exception as e:
            return False, [f"Verification failed: {str(e)}"]

    def get_logs(self, limit: int = 50, action_filter: Optional[str] = None, verify: bool = True) -> List[Dict]:
        """
        Read audit log entries with optional integrity verification

        Args:
            limit: Maximum number of entries
            action_filter: Filter by action type
            verify: Whether to verify integrity

        Returns:
            List of audit entries
        """
        if verify:
            is_valid, issues = self.verify_integrity()
            if not is_valid:
                raise ValueError(f"Audit log integrity compromised: {issues}")

        if not self.log_file.exists():
            return []

        entries = []
        with open(self.log_file) as f:
            for line in f:
                line = line.strip()
                if line:
                    entry = json.loads(line)
                    if action_filter and entry.get("action") != action_filter:
                        continue
                    entries.append(entry)

        return entries[-limit:]

    def export_audit_report(self, output_file: Optional[Path] = None) -> Dict:
        """
        Export comprehensive audit report with integrity verification

        Args:
            output_file: Optional file to write report

        Returns:
            Audit report data
        """
        # Verify integrity
        is_valid, issues = self.verify_integrity()

        # Get statistics
        all_entries = self.get_logs(limit=10000, verify=False)  # Don't double-verify

        # Action statistics
        action_counts = {}
        user_counts = {}
        hourly_counts = {}

        for entry in all_entries:
            # Action counts
            action = entry.get("action", "unknown")
            action_counts[action] = action_counts.get(action, 0) + 1

            # User counts
            user = entry.get("user", "unknown")
            user_counts[user] = user_counts.get(user, 0) + 1

            # Hourly counts
            try:
                hour = entry["timestamp"][:13]  # YYYY-MM-DDTHH
                hourly_counts[hour] = hourly_counts.get(hour, 0) + 1
            except (KeyError, TypeError):
                pass

        # Create report
        report = {
            "audit_report": {
                "generated_at": datetime.utcnow().isoformat(),
                "integrity": {
                    "is_valid": is_valid,
                    "issues": issues
                },
                "statistics": {
                    "total_entries": len(all_entries),
                    "unique_actions": len(action_counts),
                    "unique_users": len(user_counts),
                    "date_range": {
                        "first_entry": all_entries[0]["timestamp"] if all_entries else None,
                        "last_entry": all_entries[-1]["timestamp"] if all_entries else None
                    }
                },
                "action_breakdown": action_counts,
                "user_breakdown": user_counts,
                "recent_activity": hourly_counts
            },
            "sample_entries": all_entries[-10:]  # Last 10 entries
        }

        # Write to file if specified
        if output_file:
            with open(output_file, "w") as f:
                json.dump(report, f, indent=2)

        return report

    def search_logs(self, query: str, limit: int = 50) -> List[Dict]:
        """
        Search audit logs for specific content

        Args:
            query: Search query
            limit: Maximum results

        Returns:
            Matching entries
        """
        entries = self.get_logs(limit=1000, verify=False)  # Get more for search

        matches = []
        query_lower = query.lower()

        for entry in entries:
            # Search in action, user, and details
            searchable_text = f"{entry.get('action', '')} {entry.get('user', '')} {json.dumps(entry.get('details', {}))}"

            if query_lower in searchable_text.lower():
                matches.append(entry)
                if len(matches) >= limit:
                    break

        return matches

    def get_chain_info(self) -> Dict:
        """
        Get information about the audit chain

        Returns:
            Chain information
        """
        integrity_data = self._get_integrity_data()

        return {
            "genesis_hash": integrity_data["genesis_hash"],
            "last_hash": integrity_data["last_hash"],
            "entry_count": integrity_data["entry_count"],
            "created_at": integrity_data["created_at"],
            "last_updated": integrity_data.get("last_updated"),
            "version": integrity_data["version"],
            "log_file": str(self.log_file),
            "integrity_file": str(self.integrity_file)
        }


# Global secure audit logger instance
secure_audit_logger = SecureAuditLogger()


# Convenience functions for backward compatibility
def log_action(action: str, details: Optional[dict] = None, user: Optional[str] = None):
    """Log an action with secure audit logger"""
    secure_audit_logger.log(action, details, user)


def verify_audit_integrity() -> Tuple[bool, List[str]]:
    """Verify audit log integrity"""
    return secure_audit_logger.verify_integrity()


def get_audit_logs(limit: int = 50, action_filter: Optional[str] = None) -> List[Dict]:
    """Get audit logs with integrity verification"""
    return secure_audit_logger.get_logs(limit, action_filter)
280
cli/aitbc_cli/utils/security.py
Normal file
@@ -0,0 +1,280 @@
"""
Secure Encryption Utilities - Fixed Version
Replaces the broken encryption in utils/__init__.py
"""

import base64
import secrets
from typing import Optional, Dict, Any
from cryptography.fernet import Fernet, InvalidToken
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC
from cryptography.hazmat.primitives import hashes


def derive_secure_key(password: str, salt: Optional[bytes] = None) -> tuple[bytes, bytes]:
    """
    Derive secure encryption key using PBKDF2 with SHA-256

    Args:
        password: User password (required - no defaults)
        salt: Optional salt (generated if not provided)

    Returns:
        Tuple of (fernet_key, salt)

    Raises:
        ValueError: If password is empty or too weak
    """
    if not password or len(password) < 8:
        raise ValueError("Password must be at least 8 characters long")

    if salt is None:
        salt = secrets.token_bytes(32)

    kdf = PBKDF2HMAC(
        algorithm=hashes.SHA256(),
        length=32,
        salt=salt,
        iterations=600_000,  # OWASP recommended minimum
    )

    key = kdf.derive(password.encode())
    fernet_key = base64.urlsafe_b64encode(key)

    return fernet_key, salt


def encrypt_value(value: str, password: str) -> Dict[str, Any]:
    """
    Encrypt a value using PBKDF2 + Fernet (no more hardcoded keys)

    Args:
        value: Value to encrypt
        password: Strong password (required)

    Returns:
        Dict with encrypted data and metadata

    Raises:
        ValueError: If password is too weak
    """
    if not value:
        raise ValueError("Cannot encrypt empty value")

    # Derive secure key
    fernet_key, salt = derive_secure_key(password)

    # Encrypt
    f = Fernet(fernet_key)
    encrypted = f.encrypt(value.encode())

    # Fernet already returns base64, no double encoding
    return {
        "encrypted_data": encrypted.decode(),
        "salt": base64.b64encode(salt).decode(),
        "algorithm": "PBKDF2-SHA256-Fernet",
        "iterations": 600_000,
        "version": "1.0"
    }


def decrypt_value(encrypted_data: Dict[str, str] | str, password: str) -> str:
    """
    Decrypt a PBKDF2 + Fernet encrypted value

    Args:
        encrypted_data: Dict with encrypted data or legacy string
        password: Password used for encryption

    Returns:
        Decrypted value

    Raises:
        ValueError: If decryption fails, the password is wrong, or the
            encrypted data is corrupted or in the legacy format
    """
    # Handle legacy format (backward compatibility)
    if isinstance(encrypted_data, str):
        # This is the old broken format - we can't decrypt it securely
        raise ValueError(
            "Legacy encrypted format detected. "
            "This data was encrypted with a broken implementation and cannot be securely recovered. "
            "Please recreate the wallet with proper encryption."
        )

    try:
        # Extract salt and encrypted data
        salt = base64.b64decode(encrypted_data["salt"])
        encrypted = encrypted_data["encrypted_data"].encode()

        # Derive same key
        fernet_key, _ = derive_secure_key(password, salt)

        # Decrypt
        f = Fernet(fernet_key)
        decrypted = f.decrypt(encrypted)

        return decrypted.decode()
    except InvalidToken:
        raise ValueError("Invalid password or corrupted encrypted data")
    except Exception as e:
        raise ValueError(f"Decryption failed: {str(e)}")
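The key-derivation step above can be reproduced with the standard library alone. This sketch assumes the same parameters as `derive_secure_key` (PBKDF2-HMAC-SHA256, 600_000 iterations, 32-byte salt and key, urlsafe base64 for Fernet) and shows that the derivation is deterministic for a given password and salt.

```python
import base64
import hashlib
import secrets


def derive_key(password: str, salt: bytes) -> bytes:
    # Same parameters as derive_secure_key: PBKDF2-HMAC-SHA256,
    # 600_000 iterations, 32-byte key, then urlsafe base64 for Fernet
    raw = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000, dklen=32)
    return base64.urlsafe_b64encode(raw)


salt = secrets.token_bytes(32)
k1 = derive_key("correct horse battery", salt)
k2 = derive_key("correct horse battery", salt)                     # same password + salt
k3 = derive_key("correct horse battery", secrets.token_bytes(32))  # fresh salt
```

Determinism per salt is what makes `decrypt_value` work: storing the salt alongside the ciphertext lets the same password rebuild the same Fernet key, while a fresh salt per encryption keeps identical passwords from producing identical keys.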
def validate_password_strength(password: str) -> Dict[str, Any]:
    """
    Validate password strength

    Args:
        password: Password to validate

    Returns:
        Dict with validation results
    """
    issues = []
    score = 0

    if len(password) < 8:
        issues.append("Password must be at least 8 characters")
    else:
        score += 1

    if len(password) < 12:
        issues.append("Consider using 12+ characters for better security")
    else:
        score += 1

    if not any(c.isupper() for c in password):
        issues.append("Include uppercase letters")
    else:
        score += 1

    if not any(c.islower() for c in password):
        issues.append("Include lowercase letters")
    else:
        score += 1

    if not any(c.isdigit() for c in password):
        issues.append("Include numbers")
    else:
        score += 1

    if not any(c in "!@#$%^&*()_+-=[]{}|;:,.<>?" for c in password):
        issues.append("Include special characters")
    else:
        score += 1

    # Check for common patterns
    if password.lower() in ["password", "123456", "qwerty", "admin"]:
        issues.append("Avoid common passwords")
        score = 0

    strength_levels = {
        0: "Very Weak",
        1: "Weak",
        2: "Fair",
        3: "Good",
        4: "Strong",
        5: "Very Strong",
        6: "Excellent"
    }

    return {
        "score": score,
        "strength": strength_levels.get(score, "Unknown"),
        "issues": issues,
        "is_acceptable": score >= 3
    }


def generate_secure_password(length: int = 16) -> str:
    """
    Generate a secure random password

    Args:
        length: Password length

    Returns:
        Secure random password
    """
    alphabet = (
        "abcdefghijklmnopqrstuvwxyz"
        "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
        "0123456789"
        "!@#$%^&*()_+-=[]{}|;:,.<>?"
    )

    password = ''.join(secrets.choice(alphabet) for _ in range(length))

    # Ensure it meets minimum requirements
    while not validate_password_strength(password)["is_acceptable"]:
        password = ''.join(secrets.choice(alphabet) for _ in range(length))

    return password
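The scoring above is a simple checklist. This standalone sketch mirrors the six checks and the common-password override in `validate_password_strength`, using only the standard library, so the scoring behavior can be sanity-checked in isolation.

```python
import secrets
import string

SPECIALS = "!@#$%^&*()_+-=[]{}|;:,.<>?"


def score_password(password: str) -> int:
    # Mirrors the checks in validate_password_strength above
    if password.lower() in {"password", "123456", "qwerty", "admin"}:
        return 0  # common-password override zeroes the score
    checks = [
        len(password) >= 8,
        len(password) >= 12,
        any(c.isupper() for c in password),
        any(c.islower() for c in password),
        any(c.isdigit() for c in password),
        any(c in SPECIALS for c in password),
    ]
    return sum(checks)


# A generated candidate, as in generate_secure_password
alphabet = string.ascii_letters + string.digits + SPECIALS
candidate = "".join(secrets.choice(alphabet) for _ in range(16))
```

Note that the retry loop in `generate_secure_password` is needed because a uniformly random draw can still miss a character class; rejection sampling is the simplest way to guarantee the score threshold.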
# Migration helper for existing wallets
def migrate_legacy_wallet(legacy_data: Dict[str, Any], new_password: str) -> Dict[str, Any]:
    """
    Migrate a wallet from broken encryption to secure encryption

    Args:
        legacy_data: Legacy wallet data with broken encryption
        new_password: New strong password

    Returns:
        Migrated wallet data

    Raises:
        ValueError: If migration cannot be performed safely
    """
    from datetime import datetime

    # Check if this is legacy format
    if "encrypted" not in legacy_data or not legacy_data.get("encrypted"):
        raise ValueError("Not a legacy encrypted wallet")

    if "private_key" not in legacy_data:
        raise ValueError("Cannot migrate wallet without private key")

    # The legacy wallet might have a plaintext private key.
    # If it's truly encrypted with the broken method, we cannot recover it.
    private_key = legacy_data["private_key"]

    if private_key.startswith("[ENCRYPTED_MOCK]") or private_key.startswith("["):
        # This was never actually encrypted - it's a mock
        raise ValueError(
            "Cannot migrate mock wallet. "
            "Please create a new wallet with proper key generation."
        )

    # If we get here, we have a plaintext private key (security issue!)
    # Re-encrypt it properly
    try:
        encrypted_data = encrypt_value(private_key, new_password)

        return {
            **legacy_data,
            "private_key": encrypted_data,
            "encryption_version": "1.0",
            # Record an actual timestamp (previously a random token was stored here)
            "migration_timestamp": datetime.utcnow().isoformat()
        }
    except Exception as e:
        raise ValueError(f"Migration failed: {str(e)}")


# Security constants
class EncryptionConfig:
    """Encryption configuration constants"""

    PBKDF2_ITERATIONS = 600_000
    SALT_LENGTH = 32
    MIN_PASSWORD_LENGTH = 8
    RECOMMENDED_PASSWORD_LENGTH = 16

    # Algorithm identifiers
    ALGORITHM_PBKDF2_FERNET = "PBKDF2-SHA256-Fernet"
    ALGORITHM_LEGACY = "LEGACY-BROKEN"

    # Version tracking
    CURRENT_VERSION = "1.0"
    LEGACY_VERSIONS = ["0.9", "legacy", "broken"]