chore: remove outdated documentation and reference files
Some checks failed
AITBC CI/CD Pipeline / lint-and-test (3.11) (push) Has been cancelled
AITBC CI/CD Pipeline / lint-and-test (3.12) (push) Has been cancelled
AITBC CI/CD Pipeline / lint-and-test (3.13) (push) Has been cancelled
AITBC CI/CD Pipeline / test-cli (push) Has been cancelled
AITBC CI/CD Pipeline / test-services (push) Has been cancelled
AITBC CI/CD Pipeline / test-production-services (push) Has been cancelled
AITBC CI/CD Pipeline / security-scan (push) Has been cancelled
AITBC CI/CD Pipeline / build (push) Has been cancelled
AITBC CI/CD Pipeline / deploy-staging (push) Has been cancelled
AITBC CI/CD Pipeline / deploy-production (push) Has been cancelled
AITBC CI/CD Pipeline / performance-test (push) Has been cancelled
AITBC CI/CD Pipeline / docs (push) Has been cancelled
AITBC CI/CD Pipeline / release (push) Has been cancelled
AITBC CI/CD Pipeline / notify (push) Has been cancelled
Security Scanning / Bandit Security Scan (apps/coordinator-api/src) (push) Has been cancelled
Security Scanning / Bandit Security Scan (cli/aitbc_cli) (push) Has been cancelled
Security Scanning / Bandit Security Scan (packages/py/aitbc-core/src) (push) Has been cancelled
Security Scanning / Bandit Security Scan (packages/py/aitbc-crypto/src) (push) Has been cancelled
Security Scanning / Bandit Security Scan (packages/py/aitbc-sdk/src) (push) Has been cancelled
Security Scanning / Bandit Security Scan (tests) (push) Has been cancelled
Security Scanning / CodeQL Security Analysis (javascript) (push) Has been cancelled
Security Scanning / CodeQL Security Analysis (python) (push) Has been cancelled
Security Scanning / Dependency Security Scan (push) Has been cancelled
Security Scanning / Container Security Scan (push) Has been cancelled
Security Scanning / OSSF Scorecard (push) Has been cancelled
Security Scanning / Security Summary Report (push) Has been cancelled
AITBC CLI Level 1 Commands Test / test-cli-level1 (3.11) (push) Has been cancelled
AITBC CLI Level 1 Commands Test / test-cli-level1 (3.12) (push) Has been cancelled
AITBC CLI Level 1 Commands Test / test-cli-level1 (3.13) (push) Has been cancelled
AITBC CLI Level 1 Commands Test / test-summary (push) Has been cancelled
- Remove debugging service documentation (DEBUgging_SERVICES.md)
- Remove development logs policy and quick reference guides
- Remove E2E test creation summary
- Remove gift certificate example file
- Remove GitHub pull summary documentation
tests/testing/README.md (new file, 33 lines)
@@ -0,0 +1,33 @@

# Testing Scripts

This directory contains various test scripts and utilities for testing the AITBC platform.

## Test Scripts

### Block Import Tests
- **test_block_import.py** - Main block import endpoint test
- **test_block_import_complete.py** - Comprehensive block import test suite
- **test_simple_import.py** - Simple block import test
- **test_tx_import.py** - Transaction import test
- **test_tx_model.py** - Transaction model validation test
- **test_minimal.py** - Minimal test case
- **test_model_validation.py** - Model validation test

### Payment Tests
- **test_payment_integration.py** - Payment integration test suite
- **test_payment_local.py** - Local payment testing

### Test Runners
- **run_test_suite.py** - Main test suite runner
- **run_tests.py** - Simple test runner
- **verify_windsurf_tests.py** - Verify Windsurf test configuration
- **register_test_clients.py** - Register test clients for testing

## Usage

Most test scripts can be run directly with Python:

```bash
python3 test_block_import.py
```

Some scripts may require specific environment setup or configuration.
tests/testing/register_test_clients.py (new executable file, 56 lines)
@@ -0,0 +1,56 @@

#!/usr/bin/env python3
"""Register test clients for payment integration testing"""

import asyncio
import httpx
import json

# Configuration
COORDINATOR_URL = "http://127.0.0.1:8000/v1"
CLIENT_KEY = "test_client_key_123"
MINER_KEY = "${MINER_API_KEY}"


async def register_client():
    """Register a test client"""
    async with httpx.AsyncClient() as client:
        # Register client
        response = await client.post(
            f"{COORDINATOR_URL}/clients/register",
            headers={"X-API-Key": CLIENT_KEY},
            json={"name": "Test Client", "description": "Client for payment testing"}
        )
        print(f"Client registration: {response.status_code}")
        if response.status_code not in [200, 201]:
            print(f"Response: {response.text}")
        else:
            print("✓ Test client registered successfully")


async def register_miner():
    """Register a test miner"""
    async with httpx.AsyncClient() as client:
        # Register miner
        response = await client.post(
            f"{COORDINATOR_URL}/miners/register",
            headers={"X-API-Key": MINER_KEY},
            json={
                "name": "Test Miner",
                "description": "Miner for payment testing",
                "capacity": 100,
                "price_per_hour": 0.1,
                "hardware": {"gpu": "RTX 4090", "memory": "24GB"}
            }
        )
        print(f"Miner registration: {response.status_code}")
        if response.status_code not in [200, 201]:
            print(f"Response: {response.text}")
        else:
            print("✓ Test miner registered successfully")


async def main():
    print("=== Registering Test Clients ===")
    await register_client()
    await register_miner()
    print("\n✅ Test clients registered successfully!")


if __name__ == "__main__":
    asyncio.run(main())
tests/testing/run_test_suite.py (new executable file, 146 lines)
@@ -0,0 +1,146 @@

#!/usr/bin/env python3
"""
Test suite runner for AITBC
"""

import sys
import argparse
import subprocess
from pathlib import Path


def run_command(cmd, description):
    """Run a command and handle errors"""
    print(f"\n{'='*60}")
    print(f"Running: {description}")
    print(f"Command: {' '.join(cmd)}")
    print('='*60)

    result = subprocess.run(cmd, capture_output=True, text=True)

    if result.stdout:
        print(result.stdout)

    if result.stderr:
        print("STDERR:", result.stderr)

    return result.returncode == 0


def main():
    parser = argparse.ArgumentParser(description="AITBC Test Suite Runner")
    parser.add_argument(
        "--suite",
        choices=["unit", "integration", "e2e", "security", "all"],
        default="all",
        help="Test suite to run"
    )
    parser.add_argument(
        "--coverage",
        action="store_true",
        help="Generate coverage report"
    )
    parser.add_argument(
        "--parallel",
        action="store_true",
        help="Run tests in parallel"
    )
    parser.add_argument(
        "--verbose",
        action="store_true",
        help="Verbose output"
    )
    parser.add_argument(
        "--marker",
        help="Run tests with specific marker (e.g., unit, integration)"
    )
    parser.add_argument(
        "--file",
        help="Run specific test file"
    )

    args = parser.parse_args()

    # Base pytest command
    pytest_cmd = ["python", "-m", "pytest"]

    # Add verbosity
    if args.verbose:
        pytest_cmd.append("-v")

    # Add coverage if requested
    if args.coverage:
        pytest_cmd.extend([
            "--cov=apps",
            "--cov-report=html:htmlcov",
            "--cov-report=term-missing"
        ])

    # Add parallel execution if requested
    if args.parallel:
        pytest_cmd.extend(["-n", "auto"])

    # Determine which tests to run
    test_paths = []

    if args.file:
        test_paths.append(args.file)
    elif args.marker:
        pytest_cmd.extend(["-m", args.marker])
    elif args.suite == "unit":
        test_paths.append("tests/unit/")
    elif args.suite == "integration":
        test_paths.append("tests/integration/")
    elif args.suite == "e2e":
        test_paths.append("tests/e2e/")
        # E2E tests might need additional setup
        pytest_cmd.extend(["--driver=Chrome"])
    elif args.suite == "security":
        pytest_cmd.extend(["-m", "security"])
    else:  # all
        test_paths.append("tests/")

    # Add test paths to command
    pytest_cmd.extend(test_paths)

    # Add pytest configuration
    pytest_cmd.extend([
        "--tb=short",
        "--strict-markers",
        "--disable-warnings"
    ])

    # Run the tests
    success = run_command(pytest_cmd, f"{args.suite.title()} Test Suite")

    if success:
        print(f"\n✅ {args.suite.title()} tests passed!")

        if args.coverage:
            print("\n📊 Coverage report generated in htmlcov/index.html")
    else:
        print(f"\n❌ {args.suite.title()} tests failed!")
        sys.exit(1)

    # Additional checks
    if args.suite in ["all", "integration"]:
        print("\n🔍 Running integration test checks...")
        # Add any integration-specific checks here

    if args.suite in ["all", "e2e"]:
        print("\n🌐 Running E2E test checks...")
        # Add any E2E-specific checks here

    if args.suite in ["all", "security"]:
        print("\n🔒 Running security scan...")
        # Run security scan
        security_cmd = ["bandit", "-r", "apps/"]
        run_command(security_cmd, "Security Scan")

        # Run dependency check
        deps_cmd = ["safety", "check"]
        run_command(deps_cmd, "Dependency Security Check")


if __name__ == "__main__":
    main()
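For illustration, the flag-to-command mapping in run_test_suite.py can be exercised in isolation. This is a sketch that mirrors the script's command-assembly logic; `build_pytest_cmd` is a name introduced here, not part of the repository, and `--parallel` assumes pytest-xdist is installed:

```python
def build_pytest_cmd(suite="all", coverage=False, parallel=False, verbose=False):
    """Mirror of the pytest command assembly in run_test_suite.py's main()."""
    cmd = ["python", "-m", "pytest"]
    if verbose:
        cmd.append("-v")
    if coverage:
        cmd += ["--cov=apps", "--cov-report=html:htmlcov", "--cov-report=term-missing"]
    if parallel:
        cmd += ["-n", "auto"]  # pytest-xdist worker auto-detection
    suite_paths = {"unit": "tests/unit/", "integration": "tests/integration/",
                   "e2e": "tests/e2e/", "all": "tests/"}
    if suite == "security":
        cmd += ["-m", "security"]  # security suite selects by marker, not path
    else:
        cmd.append(suite_paths[suite])
    cmd += ["--tb=short", "--strict-markers", "--disable-warnings"]
    return cmd

print(" ".join(build_pytest_cmd(suite="unit", coverage=True)))
```

Factoring the mapping out like this also makes the runner's behavior unit-testable without spawning a subprocess.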
tests/testing/run_tests.py (new executable file, 26 lines)
@@ -0,0 +1,26 @@

#!/usr/bin/env python3
"""
Wrapper script to run pytest with proper Python path configuration
"""

import sys
from pathlib import Path

# Add project root to sys.path (this file lives in tests/testing/, two levels below the root)
project_root = Path(__file__).resolve().parents[2]
sys.path.insert(0, str(project_root))

# Add package source directories
sys.path.insert(0, str(project_root / "packages" / "py" / "aitbc-core" / "src"))
sys.path.insert(0, str(project_root / "packages" / "py" / "aitbc-crypto" / "src"))
sys.path.insert(0, str(project_root / "packages" / "py" / "aitbc-p2p" / "src"))
sys.path.insert(0, str(project_root / "packages" / "py" / "aitbc-sdk" / "src"))

# Add app source directories
sys.path.insert(0, str(project_root / "apps" / "coordinator-api" / "src"))
sys.path.insert(0, str(project_root / "apps" / "wallet-daemon" / "src"))
sys.path.insert(0, str(project_root / "apps" / "blockchain-node" / "src"))

# Run pytest with the original arguments
import pytest
sys.exit(pytest.main())
tests/testing/test_block_import.py (new executable file, 203 lines)
@@ -0,0 +1,203 @@

#!/usr/bin/env python3
"""
Test script for block import endpoint
Tests the /rpc/blocks/import POST endpoint functionality
"""

import json
import hashlib
from datetime import datetime

# Test configuration
BASE_URL = "https://aitbc.bubuit.net/rpc"
CHAIN_ID = "ait-devnet"


def compute_block_hash(height, parent_hash, timestamp):
    """Compute block hash using the same algorithm as PoA proposer"""
    payload = f"{CHAIN_ID}|{height}|{parent_hash}|{timestamp}".encode()
    return "0x" + hashlib.sha256(payload).hexdigest()


def test_block_import():
    """Test the block import endpoint with various scenarios"""
    import requests

    print("Testing Block Import Endpoint")
    print("=" * 50)

    # Test 1: Invalid height (0)
    print("\n1. Testing invalid height (0)...")
    response = requests.post(
        f"{BASE_URL}/blocks/import",
        json={
            "height": 0,
            "hash": "0x123",
            "parent_hash": "0x00",
            "proposer": "test",
            "timestamp": "2026-01-29T10:20:00",
            "tx_count": 0
        }
    )
    print(f"Status: {response.status_code}")
    print(f"Response: {response.json()}")
    assert response.status_code == 422, "Should return validation error for height 0"
    print("✓ Correctly rejected height 0")

    # Test 2: Block already exists with different hash
    print("\n2. Testing block conflict...")
    response = requests.post(
        f"{BASE_URL}/blocks/import",
        json={
            "height": 1,
            "hash": "0xinvalidhash",
            "parent_hash": "0x00",
            "proposer": "test",
            "timestamp": "2026-01-29T10:20:00",
            "tx_count": 0
        }
    )
    print(f"Status: {response.status_code}")
    print(f"Response: {response.json()}")
    assert response.status_code == 409, "Should return conflict for existing height with different hash"
    print("✓ Correctly detected block conflict")

    # Test 3: Import existing block with correct hash
    print("\n3. Testing import of existing block with correct hash...")
    # Get actual block data
    response = requests.get(f"{BASE_URL}/blocks/1")
    block_data = response.json()

    response = requests.post(
        f"{BASE_URL}/blocks/import",
        json={
            "height": block_data["height"],
            "hash": block_data["hash"],
            "parent_hash": block_data["parent_hash"],
            "proposer": block_data["proposer"],
            "timestamp": block_data["timestamp"],
            "tx_count": block_data["tx_count"]
        }
    )
    print(f"Status: {response.status_code}")
    print(f"Response: {response.json()}")
    assert response.status_code == 200, "Should accept existing block with correct hash"
    assert response.json()["status"] == "exists", "Should return 'exists' status"
    print("✓ Correctly handled existing block")

    # Test 4: Invalid block hash (with valid parent)
    print("\n4. Testing invalid block hash...")
    # Get current head to use as parent
    response = requests.get(f"{BASE_URL}/head")
    head = response.json()

    timestamp = "2026-01-29T10:20:00"
    parent_hash = head["hash"]  # Use actual parent hash
    height = head["height"] + 1000  # Use high height to avoid conflicts
    invalid_hash = "0xinvalid"

    response = requests.post(
        f"{BASE_URL}/blocks/import",
        json={
            "height": height,
            "hash": invalid_hash,
            "parent_hash": parent_hash,
            "proposer": "test",
            "timestamp": timestamp,
            "tx_count": 0
        }
    )
    print(f"Status: {response.status_code}")
    print(f"Response: {response.json()}")
    assert response.status_code == 400, "Should reject invalid hash"
    assert "Invalid block hash" in response.json()["detail"], "Should mention invalid hash"
    print("✓ Correctly rejected invalid hash")

    # Test 5: Valid hash but parent not found
    print("\n5. Testing valid hash but parent not found...")
    height = head["height"] + 2000  # Use different height
    parent_hash = "0xnonexistentparent"
    timestamp = "2026-01-29T10:20:00"
    valid_hash = compute_block_hash(height, parent_hash, timestamp)

    response = requests.post(
        f"{BASE_URL}/blocks/import",
        json={
            "height": height,
            "hash": valid_hash,
            "parent_hash": parent_hash,
            "proposer": "test",
            "timestamp": timestamp,
            "tx_count": 0
        }
    )
    print(f"Status: {response.status_code}")
    print(f"Response: {response.json()}")
    assert response.status_code == 400, "Should reject when parent not found"
    assert "Parent block not found" in response.json()["detail"], "Should mention parent not found"
    print("✓ Correctly rejected missing parent")

    # Test 6: Valid block with transactions and receipts
    print("\n6. Testing valid block with transactions...")
    # Get current head to use as parent
    response = requests.get(f"{BASE_URL}/head")
    head = response.json()

    height = head["height"] + 1
    parent_hash = head["hash"]
    timestamp = datetime.utcnow().isoformat() + "Z"
    valid_hash = compute_block_hash(height, parent_hash, timestamp)

    test_block = {
        "height": height,
        "hash": valid_hash,
        "parent_hash": parent_hash,
        "proposer": "test-proposer",
        "timestamp": timestamp,
        "tx_count": 1,
        "transactions": [{
            "tx_hash": f"0xtx{height}",
            "sender": "0xsender",
            "recipient": "0xreceiver",
            "payload": {"to": "0xreceiver", "amount": 1000000}
        }],
        "receipts": [{
            "receipt_id": f"rx{height}",
            "job_id": f"job{height}",
            "payload": {"result": "success"},
            "miner_signature": "0xminer",
            "coordinator_attestations": ["0xatt1"],
            "minted_amount": 100,
            "recorded_at": timestamp
        }]
    }

    response = requests.post(
        f"{BASE_URL}/blocks/import",
        json=test_block
    )
    print(f"Status: {response.status_code}")
    print(f"Response: {response.json()}")
    assert response.status_code == 200, "Should accept valid block with transactions"
    assert response.json()["status"] == "imported", "Should return 'imported' status"
    print("✓ Successfully imported block with transactions")

    # Verify the block was imported
    print("\n7. Verifying imported block...")
    response = requests.get(f"{BASE_URL}/blocks/{height}")
    assert response.status_code == 200, "Should be able to retrieve imported block"
    imported_block = response.json()
    assert imported_block["hash"] == valid_hash, "Hash should match"
    assert imported_block["tx_count"] == 1, "Should have 1 transaction"
    print("✓ Block successfully imported and retrievable")

    print("\n" + "=" * 50)
    print("All tests passed! ✅")
    print("\nBlock import endpoint is fully functional with:")
    print("- ✓ Input validation")
    print("- ✓ Hash validation")
    print("- ✓ Parent block verification")
    print("- ✓ Conflict detection")
    print("- ✓ Transaction and receipt import")
    print("- ✓ Proper error handling")


if __name__ == "__main__":
    test_block_import()
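The hash scheme these import tests rely on is deterministic and easy to check in isolation. The sketch below reproduces `compute_block_hash` exactly as defined in the test files (pipe-joined chain id, height, parent hash, and timestamp, SHA-256, hex with a `0x` prefix):

```python
import hashlib

CHAIN_ID = "ait-devnet"

def compute_block_hash(height, parent_hash, timestamp):
    # Preimage layout matches the tests: chain id | height | parent hash | timestamp
    payload = f"{CHAIN_ID}|{height}|{parent_hash}|{timestamp}".encode()
    return "0x" + hashlib.sha256(payload).hexdigest()

h = compute_block_hash(1, "0x00", "2026-01-29T10:20:00")
print(h[:2], len(h))  # → 0x 66
```

Because the hash covers only these four fields, changing any one of them (even the timestamp string's formatting) produces a different block hash, which is why the tests reuse fixed timestamp literals.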
tests/testing/test_block_import_complete.py (new executable file, 224 lines)
@@ -0,0 +1,224 @@

#!/usr/bin/env python3
"""
Comprehensive test for block import endpoint
Tests all functionality including validation, conflicts, and transaction import
"""

import json
import hashlib
import requests
from datetime import datetime

BASE_URL = "https://aitbc.bubuit.net/rpc"
CHAIN_ID = "ait-devnet"


def compute_block_hash(height, parent_hash, timestamp):
    """Compute block hash using the same algorithm as PoA proposer"""
    payload = f"{CHAIN_ID}|{height}|{parent_hash}|{timestamp}".encode()
    return "0x" + hashlib.sha256(payload).hexdigest()


def test_block_import_complete():
    """Complete test suite for block import endpoint"""

    print("=" * 60)
    print("BLOCK IMPORT ENDPOINT TEST SUITE")
    print("=" * 60)

    results = []

    # Test 1: Invalid height (0)
    print("\n[TEST 1] Invalid height (0)...")
    response = requests.post(
        f"{BASE_URL}/blocks/import",
        json={
            "height": 0,
            "hash": "0x123",
            "parent_hash": "0x00",
            "proposer": "test",
            "timestamp": "2026-01-29T10:20:00",
            "tx_count": 0
        }
    )
    if response.status_code == 422 and "greater_than" in response.json()["detail"][0]["msg"]:
        print("✅ PASS: Correctly rejected height 0")
        results.append(True)
    else:
        print(f"❌ FAIL: Expected 422, got {response.status_code}")
        results.append(False)

    # Test 2: Block conflict
    print("\n[TEST 2] Block conflict...")
    response = requests.post(
        f"{BASE_URL}/blocks/import",
        json={
            "height": 1,
            "hash": "0xinvalidhash",
            "parent_hash": "0x00",
            "proposer": "test",
            "timestamp": "2026-01-29T10:20:00",
            "tx_count": 0
        }
    )
    if response.status_code == 409 and "already exists with different hash" in response.json()["detail"]:
        print("✅ PASS: Correctly detected block conflict")
        results.append(True)
    else:
        print(f"❌ FAIL: Expected 409, got {response.status_code}")
        results.append(False)

    # Test 3: Import existing block with correct hash
    print("\n[TEST 3] Import existing block with correct hash...")
    response = requests.get(f"{BASE_URL}/blocks/1")
    block_data = response.json()

    response = requests.post(
        f"{BASE_URL}/blocks/import",
        json={
            "height": block_data["height"],
            "hash": block_data["hash"],
            "parent_hash": block_data["parent_hash"],
            "proposer": block_data["proposer"],
            "timestamp": block_data["timestamp"],
            "tx_count": block_data["tx_count"]
        }
    )
    if response.status_code == 200 and response.json()["status"] == "exists":
        print("✅ PASS: Correctly handled existing block")
        results.append(True)
    else:
        print(f"❌ FAIL: Expected 200 with 'exists' status, got {response.status_code}")
        results.append(False)

    # Test 4: Invalid block hash
    print("\n[TEST 4] Invalid block hash...")
    response = requests.get(f"{BASE_URL}/head")
    head = response.json()

    response = requests.post(
        f"{BASE_URL}/blocks/import",
        json={
            "height": 999999,
            "hash": "0xinvalid",
            "parent_hash": head["hash"],
            "proposer": "test",
            "timestamp": "2026-01-29T10:20:00",
            "tx_count": 0
        }
    )
    if response.status_code == 400 and "Invalid block hash" in response.json()["detail"]:
        print("✅ PASS: Correctly rejected invalid hash")
        results.append(True)
    else:
        print(f"❌ FAIL: Expected 400, got {response.status_code}")
        results.append(False)

    # Test 5: Parent not found
    print("\n[TEST 5] Parent block not found...")
    response = requests.post(
        f"{BASE_URL}/blocks/import",
        json={
            "height": 999998,
            "hash": compute_block_hash(999998, "0xnonexistent", "2026-01-29T10:20:00"),
            "parent_hash": "0xnonexistent",
            "proposer": "test",
            "timestamp": "2026-01-29T10:20:00",
            "tx_count": 0
        }
    )
    if response.status_code == 400 and "Parent block not found" in response.json()["detail"]:
        print("✅ PASS: Correctly rejected missing parent")
        results.append(True)
    else:
        print(f"❌ FAIL: Expected 400, got {response.status_code}")
        results.append(False)

    # Test 6: Import block without transactions
    print("\n[TEST 6] Import block without transactions...")
    response = requests.get(f"{BASE_URL}/head")
    head = response.json()

    height = head["height"] + 1
    block_hash = compute_block_hash(height, head["hash"], "2026-01-29T10:20:00")

    response = requests.post(
        f"{BASE_URL}/blocks/import",
        json={
            "height": height,
            "hash": block_hash,
            "parent_hash": head["hash"],
            "proposer": "test-proposer",
            "timestamp": "2026-01-29T10:20:00",
            "tx_count": 0,
            "transactions": []
        }
    )
    if response.status_code == 200 and response.json()["status"] == "imported":
        print("✅ PASS: Successfully imported block without transactions")
        results.append(True)
    else:
        print(f"❌ FAIL: Expected 200, got {response.status_code}")
        results.append(False)

    # Test 7: Import block with transactions (KNOWN ISSUE)
    print("\n[TEST 7] Import block with transactions...")
    print("⚠️  KNOWN ISSUE: Transaction import currently fails with database constraint error")
    print("   This appears to be a bug in the transaction field mapping")

    height = height + 1
    block_hash = compute_block_hash(height, head["hash"], "2026-01-29T10:20:00")

    response = requests.post(
        f"{BASE_URL}/blocks/import",
        json={
            "height": height,
            "hash": block_hash,
            "parent_hash": head["hash"],
            "proposer": "test-proposer",
            "timestamp": "2026-01-29T10:20:00",
            "tx_count": 1,
            "transactions": [{
                "tx_hash": "0xtx123",
                "sender": "0xsender",
                "recipient": "0xrecipient",
                "payload": {"test": "data"}
            }]
        }
    )
    if response.status_code == 500:
        print("⚠️  EXPECTED FAILURE: Transaction import fails with 500 error")
        print("   Error: NOT NULL constraint failed on transaction fields")
        results.append(None)  # Known issue, not counting as fail
    else:
        print(f"❓ UNEXPECTED: Got {response.status_code} instead of expected 500")
        results.append(None)

    # Summary
    print("\n" + "=" * 60)
    print("TEST SUMMARY")
    print("=" * 60)

    passed = sum(1 for r in results if r is True)
    failed = sum(1 for r in results if r is False)
    known_issues = sum(1 for r in results if r is None)

    print(f"✅ Passed: {passed}")
    print(f"❌ Failed: {failed}")
    if known_issues > 0:
        print(f"⚠️  Known Issues: {known_issues}")

    print("\nFUNCTIONALITY STATUS:")
    print("- ✅ Input validation (height, hash, parent)")
    print("- ✅ Conflict detection")
    print("- ✅ Block import without transactions")
    print("- ❌ Block import with transactions (database constraint issue)")

    if failed == 0:
        print("\n🎉 All core functionality is working!")
        print("   The block import endpoint is functional for basic use.")
    else:
        print(f"\n⚠️  {failed} test(s) failed - review required")

    return passed, failed, known_issues


if __name__ == "__main__":
    test_block_import_complete()
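The summary tally in this suite uses a tri-state list — `True` for pass, `False` for fail, `None` for a known issue — and counts each state with `is` comparisons. In isolation:

```python
results = [True, True, None, False, True]

# `is` keeps the three states distinct: None is falsy but is not False,
# so known issues are never miscounted as failures
passed = sum(1 for r in results if r is True)
failed = sum(1 for r in results if r is False)
known_issues = sum(1 for r in results if r is None)

print(passed, failed, known_issues)  # → 3 1 1
```

Using `if not r` instead would lump `None` and `False` together, which is exactly what the `is` checks avoid.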
tests/testing/test_coordinator.py (new executable file, 45 lines)
@@ -0,0 +1,45 @@

#!/usr/bin/env python3
"""
Test GPU registration with mock coordinator
"""

import httpx
import json

COORDINATOR_URL = "http://localhost:8090"

# Test available endpoints
print("=== Testing Mock Coordinator Endpoints ===")
endpoints = [
    "/",
    "/health",
    "/metrics",
    "/miners/register",
    "/miners/list",
    "/marketplace/offers"
]

for endpoint in endpoints:
    try:
        response = httpx.get(f"{COORDINATOR_URL}{endpoint}", timeout=5)
        print(f"{endpoint}: {response.status_code}")
        if response.status_code == 200 and response.text:
            try:
                data = response.json()
                print(f"  Response: {json.dumps(data, indent=2)[:200]}...")
            except:
                print(f"  Response: {response.text[:100]}...")
    except Exception as e:
        print(f"{endpoint}: Error - {e}")

print("\n=== Checking OpenAPI Spec ===")
try:
    response = httpx.get(f"{COORDINATOR_URL}/openapi.json", timeout=5)
    if response.status_code == 200:
        openapi = response.json()
        paths = list(openapi.get("paths", {}).keys())
        print(f"Available endpoints: {paths}")
    else:
        print(f"OpenAPI not available: {response.status_code}")
except Exception as e:
    print(f"Error getting OpenAPI: {e}")
tests/testing/test_host_miner.py (new executable file, 63 lines)
@@ -0,0 +1,63 @@

#!/usr/bin/env python3
"""
Test script for host GPU miner
"""

import subprocess
import httpx

# Test GPU
print("Testing GPU access...")
result = subprocess.run(['nvidia-smi', '--query-gpu=name', '--format=csv,noheader,nounits'],
                        capture_output=True, text=True)
if result.returncode == 0:
    print(f"✅ GPU detected: {result.stdout.strip()}")
else:
    print("❌ GPU not accessible")

# Test Ollama
print("\nTesting Ollama...")
try:
    response = httpx.get("http://localhost:11434/api/tags", timeout=5)
    if response.status_code == 200:
        models = response.json().get('models', [])
        print(f"✅ Ollama running with {len(models)} models")
        for m in models[:3]:  # Show first 3 models
            print(f"  - {m['name']}")
    else:
        print("❌ Ollama not responding")
except Exception as e:
    print(f"❌ Ollama error: {e}")

# Test Coordinator
print("\nTesting Coordinator...")
try:
    response = httpx.get("http://127.0.0.1:8000/v1/health", timeout=5)
    if response.status_code == 200:
        print("✅ Coordinator is accessible")
    else:
        print("❌ Coordinator not responding")
except Exception as e:
    print(f"❌ Coordinator error: {e}")

# Test Ollama inference
print("\nTesting Ollama inference...")
try:
    response = httpx.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "llama3.2:latest",
            "prompt": "Say hello",
            "stream": False
        },
        timeout=10
    )
    if response.status_code == 200:
        result = response.json()
        print(f"✅ Inference successful: {result.get('response', '')[:50]}...")
    else:
        print("❌ Inference failed")
except Exception as e:
    print(f"❌ Inference error: {e}")

print("\n✅ All tests completed!")
tests/testing/test_minimal.py (new executable file, 65 lines)
@@ -0,0 +1,65 @@
|
||||
#!/usr/bin/env python3
|
||||
"""
|
||||
Minimal test to debug transaction import
|
||||
"""
|
||||
|
||||
import json
|
||||
import hashlib
|
||||
import requests
|
||||
|
||||
BASE_URL = "https://aitbc.bubuit.net/rpc"
|
||||
CHAIN_ID = "ait-devnet"
|
||||
|
||||
def compute_block_hash(height, parent_hash, timestamp):
|
||||
"""Compute block hash using the same algorithm as PoA proposer"""
|
||||
payload = f"{CHAIN_ID}|{height}|{parent_hash}|{timestamp}".encode()
|
||||
return "0x" + hashlib.sha256(payload).hexdigest()
|
||||
|
||||
def test_minimal():
|
||||
"""Test with minimal data"""
|
||||
|
||||
# Get current head
|
||||
response = requests.get(f"{BASE_URL}/head")
|
||||
head = response.json()
|
||||
|
||||
# Create a new block
|
||||
height = head["height"] + 1
|
||||
parent_hash = head["hash"]
|
||||
timestamp = "2026-01-29T10:20:00"
|
||||
block_hash = compute_block_hash(height, parent_hash, timestamp)
|
||||
|
||||
# Test with empty transactions list first
|
||||
test_block = {
|
||||
"height": height,
|
||||
"hash": block_hash,
|
||||
"parent_hash": parent_hash,
|
||||
"proposer": "test-proposer",
|
||||
"timestamp": timestamp,
|
||||
"tx_count": 0,
|
||||
"transactions": []
|
||||
}
|
||||
|
||||
print("Testing with empty transactions list...")
|
||||
response = requests.post(f"{BASE_URL}/blocks/import", json=test_block)
|
||||
print(f"Status: {response.status_code}")
|
||||
print(f"Response: {response.json()}")
|
||||
|
||||
if response.status_code == 200:
|
||||
print("\n✅ Empty transactions work!")
|
||||
|
||||
# Now test with one transaction
|
||||
height = height + 1
|
||||
block_hash = compute_block_hash(height, parent_hash, timestamp)
|
||||
|
||||
test_block["height"] = height
|
||||
test_block["hash"] = block_hash
|
||||
test_block["tx_count"] = 1
|
||||
test_block["transactions"] = [{"tx_hash": "0xtest", "sender": "0xtest", "recipient": "0xtest", "payload": {}}]
|
||||
|
||||
print("\nTesting with one transaction...")
|
||||
response = requests.post(f"{BASE_URL}/blocks/import", json=test_block)
|
||||
print(f"Status: {response.status_code}")
|
||||
print(f"Response: {response.json()}")
|
||||
|
||||
if __name__ == "__main__":
|
||||
test_minimal()
|
||||
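Because `compute_block_hash` is a pure function of chain id, height, parent hash, and timestamp, its behavior can be pinned down without any RPC calls. A minimal property check, mirroring the hashing scheme from the script above:

```python
import hashlib

CHAIN_ID = "ait-devnet"

def compute_block_hash(height, parent_hash, timestamp):
    """Same scheme as test_minimal.py: sha256 over a pipe-joined payload."""
    payload = f"{CHAIN_ID}|{height}|{parent_hash}|{timestamp}".encode()
    return "0x" + hashlib.sha256(payload).hexdigest()

a = compute_block_hash(1, "0x00", "2026-01-29T10:20:00")
b = compute_block_hash(1, "0x00", "2026-01-29T10:20:00")
c = compute_block_hash(2, "0x00", "2026-01-29T10:20:00")
assert a == b                                # deterministic
assert a != c                                # height changes the digest
assert a.startswith("0x") and len(a) == 66   # "0x" + 64 hex chars
```

Any field change (height, parent, or timestamp) produces a new digest, which is exactly why the script recomputes the hash before its second import attempt.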
tests/testing/test_model_validation.py (new executable file, 57 lines)
@@ -0,0 +1,57 @@
|
||||
#!/usr/bin/env python3
|
||||
"""
|
||||
Test the BlockImportRequest model
|
||||
"""
|
||||
|
||||
from pydantic import BaseModel, Field
|
||||
from typing import Dict, Any, List, Optional
|
||||
|
||||
class TransactionData(BaseModel):
|
||||
tx_hash: str
|
||||
sender: str
|
||||
recipient: str
|
||||
payload: Dict[str, Any] = Field(default_factory=dict)
|
||||
|
||||
class BlockImportRequest(BaseModel):
|
||||
height: int = Field(gt=0)
|
||||
hash: str
|
||||
parent_hash: str
|
||||
proposer: str
|
||||
timestamp: str
|
||||
tx_count: int = Field(ge=0)
|
||||
state_root: Optional[str] = None
|
||||
transactions: List[TransactionData] = Field(default_factory=list)
|
||||
|
||||
# Test creating the request
|
||||
test_data = {
|
||||
"height": 1,
|
||||
"hash": "0xtest",
|
||||
"parent_hash": "0x00",
|
||||
"proposer": "test",
|
||||
"timestamp": "2026-01-29T10:20:00",
|
||||
"tx_count": 1,
|
||||
"transactions": [{
|
||||
"tx_hash": "0xtx123",
|
||||
"sender": "0xsender",
|
||||
"recipient": "0xrecipient",
|
||||
"payload": {"test": "data"}
|
||||
}]
|
||||
}
|
||||
|
||||
print("Test data:")
|
||||
print(test_data)
|
||||
|
||||
try:
|
||||
request = BlockImportRequest(**test_data)
|
||||
print("\n✅ Request validated successfully!")
|
||||
print(f"Transactions count: {len(request.transactions)}")
|
||||
if request.transactions:
|
||||
tx = request.transactions[0]
|
||||
print(f"First transaction:")
|
||||
print(f" tx_hash: {tx.tx_hash}")
|
||||
print(f" sender: {tx.sender}")
|
||||
print(f" recipient: {tx.recipient}")
|
||||
except Exception as e:
|
||||
print(f"\n❌ Validation failed: {e}")
|
||||
import traceback
|
||||
traceback.print_exc()
|
||||
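`BlockImportRequest` validates each field independently, so nothing stops `tx_count` from disagreeing with `len(transactions)`. If the coordinator relies on that invariant, a cross-field check is worth having. A dependency-free sketch of the check on plain dicts (in pydantic this could be a model validator, whose exact spelling is version-dependent):

```python
def tx_count_consistent(block: dict) -> bool:
    """Return True when tx_count matches the transactions list length.

    Hypothetical consistency check; the pydantic model above does not
    enforce this relationship between fields.
    """
    return block.get("tx_count", 0) == len(block.get("transactions", []))

good = {"tx_count": 1, "transactions": [{"tx_hash": "0xtx123"}]}
bad = {"tx_count": 3, "transactions": []}
assert tx_count_consistent(good)
assert not tx_count_consistent(bad)
```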
tests/testing/test_payment_integration.py (new executable file, 317 lines)
@@ -0,0 +1,317 @@
|
||||
#!/usr/bin/env python3
|
||||
"""
|
||||
Test script for AITBC Payment Integration
|
||||
Tests job creation with payments, escrow, release, and refund flows
|
||||
"""
|
||||
|
||||
import asyncio
|
||||
import httpx
|
||||
import json
|
||||
import logging
|
||||
from datetime import datetime
|
||||
from typing import Dict, Any
|
||||
|
||||
# Configure logging
|
||||
logging.basicConfig(level=logging.INFO)
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
# Configuration
|
||||
COORDINATOR_URL = "https://aitbc.bubuit.net/api"
|
||||
CLIENT_KEY = "test_client_key_123"
|
||||
MINER_KEY = "${MINER_API_KEY}"
|
||||
|
||||
class PaymentIntegrationTest:
|
||||
def __init__(self):
|
||||
self.client = httpx.Client(timeout=30.0)
|
||||
self.job_id = None
|
||||
self.payment_id = None
|
||||
|
||||
async def test_complete_payment_flow(self):
|
||||
"""Test the complete payment flow from job creation to payment release"""
|
||||
|
||||
logger.info("=== Starting AITBC Payment Integration Test ===")
|
||||
|
||||
# Step 1: Check coordinator health
|
||||
await self.check_health()
|
||||
|
||||
# Step 2: Submit a job with payment
|
||||
await self.submit_job_with_payment()
|
||||
|
||||
# Step 3: Check job status and payment
|
||||
await self.check_job_and_payment_status()
|
||||
|
||||
# Step 4: Simulate job completion by miner
|
||||
await self.complete_job()
|
||||
|
||||
# Step 5: Verify payment was released
|
||||
await self.verify_payment_release()
|
||||
|
||||
# Step 6: Test refund flow with a new job
|
||||
await self.test_refund_flow()
|
||||
|
||||
logger.info("=== Payment Integration Test Complete ===")
|
||||
|
||||
async def check_health(self):
|
||||
"""Check if coordinator API is healthy"""
|
||||
logger.info("Step 1: Checking coordinator health...")
|
||||
|
||||
response = self.client.get(f"{COORDINATOR_URL}/health")
|
||||
|
||||
if response.status_code == 200:
|
||||
logger.info(f"✓ Coordinator healthy: {response.json()}")
|
||||
else:
|
||||
raise Exception(f"Coordinator health check failed: {response.status_code}")
|
||||
|
||||
async def submit_job_with_payment(self):
|
||||
"""Submit a job with AITBC token payment"""
|
||||
logger.info("Step 2: Submitting job with payment...")
|
||||
|
||||
job_data = {
|
||||
"service_type": "llm",
|
||||
"service_params": {
|
||||
"model": "llama3.2",
|
||||
"prompt": "What is AITBC?",
|
||||
"max_tokens": 100
|
||||
},
|
||||
"payment_amount": 1.0,
|
||||
"payment_currency": "AITBC",
|
||||
"escrow_timeout_seconds": 3600
|
||||
}
|
||||
|
||||
headers = {"X-Client-Key": CLIENT_KEY}
|
||||
|
||||
response = self.client.post(
|
||||
f"{COORDINATOR_URL}/v1/jobs",
|
||||
json=job_data,
|
||||
headers=headers
|
||||
)
|
||||
|
||||
if response.status_code == 201:
|
||||
job = response.json()
|
||||
self.job_id = job["job_id"]
|
||||
logger.info(f"✓ Job created with ID: {self.job_id}")
|
||||
logger.info(f" Payment status: {job.get('payment_status', 'N/A')}")
|
||||
else:
|
||||
raise Exception(f"Failed to create job: {response.status_code} - {response.text}")
|
||||
|
||||
async def check_job_and_payment_status(self):
|
||||
"""Check job status and payment details"""
|
||||
logger.info("Step 3: Checking job and payment status...")
|
||||
|
||||
headers = {"X-Client-Key": CLIENT_KEY}
|
||||
|
||||
# Get job status
|
||||
response = self.client.get(
|
||||
f"{COORDINATOR_URL}/v1/jobs/{self.job_id}",
|
||||
headers=headers
|
||||
)
|
||||
|
||||
if response.status_code == 200:
|
||||
job = response.json()
|
||||
logger.info(f"✓ Job status: {job['state']}")
|
||||
logger.info(f" Payment ID: {job.get('payment_id', 'N/A')}")
|
||||
logger.info(f" Payment status: {job.get('payment_status', 'N/A')}")
|
||||
|
||||
self.payment_id = job.get('payment_id')
|
||||
|
||||
# Get payment details if payment_id exists
|
||||
if self.payment_id:
|
||||
payment_response = self.client.get(
|
||||
f"{COORDINATOR_URL}/v1/payments/{self.payment_id}",
|
||||
headers=headers
|
||||
)
|
||||
|
||||
if payment_response.status_code == 200:
|
||||
payment = payment_response.json()
|
||||
logger.info(f"✓ Payment details:")
|
||||
logger.info(f" Amount: {payment['amount']} {payment['currency']}")
|
||||
logger.info(f" Status: {payment['status']}")
|
||||
logger.info(f" Method: {payment['payment_method']}")
|
||||
else:
|
||||
logger.warning(f"Could not fetch payment details: {payment_response.status_code}")
|
||||
else:
|
||||
raise Exception(f"Failed to get job status: {response.status_code}")
|
||||
|
||||
async def complete_job(self):
|
||||
"""Simulate miner completing the job"""
|
||||
logger.info("Step 4: Simulating job completion...")
|
||||
|
||||
# First, poll for the job as miner
|
||||
headers = {"X-Miner-Key": MINER_KEY}
|
||||
|
||||
poll_response = self.client.post(
|
||||
f"{COORDINATOR_URL}/v1/miners/poll",
|
||||
json={"capabilities": ["llm"]},
|
||||
headers=headers
|
||||
)
|
||||
|
||||
if poll_response.status_code == 200:
|
||||
poll_data = poll_response.json()
|
||||
if poll_data.get("job_id") == self.job_id:
|
||||
logger.info(f"✓ Miner received job: {self.job_id}")
|
||||
|
||||
# Submit job result
|
||||
result_data = {
|
||||
"result": json.dumps({
|
||||
"text": "AITBC is a decentralized AI computing marketplace that uses blockchain for payments and zero-knowledge proofs for privacy.",
|
||||
"model": "llama3.2",
|
||||
"tokens_used": 42
|
||||
}),
|
||||
"metrics": {
|
||||
"duration_ms": 2500,
|
||||
"tokens_used": 42,
|
||||
"gpu_seconds": 0.5
|
||||
}
|
||||
}
|
||||
|
||||
submit_response = self.client.post(
|
||||
f"{COORDINATOR_URL}/v1/miners/{self.job_id}/result",
|
||||
json=result_data,
|
||||
headers=headers
|
||||
)
|
||||
|
||||
if submit_response.status_code == 200:
|
||||
logger.info("✓ Job result submitted successfully")
|
||||
logger.info(f" Receipt: {submit_response.json().get('receipt', {}).get('receipt_id', 'N/A')}")
|
||||
else:
|
||||
raise Exception(f"Failed to submit result: {submit_response.status_code}")
|
||||
else:
|
||||
logger.warning(f"Miner received different job: {poll_data.get('job_id')}")
|
||||
else:
|
||||
raise Exception(f"Failed to poll for job: {poll_response.status_code}")
|
||||
|
||||
async def verify_payment_release(self):
|
||||
"""Verify that payment was released after job completion"""
|
||||
logger.info("Step 5: Verifying payment release...")
|
||||
|
||||
# Wait a moment for payment processing
|
||||
await asyncio.sleep(2)
|
||||
|
||||
headers = {"X-Client-Key": CLIENT_KEY}
|
||||
|
||||
# Check updated job status
|
||||
response = self.client.get(
|
||||
f"{COORDINATOR_URL}/v1/jobs/{self.job_id}",
|
||||
headers=headers
|
||||
)
|
||||
|
||||
if response.status_code == 200:
|
||||
job = response.json()
|
||||
logger.info(f"✓ Final job status: {job['state']}")
|
||||
logger.info(f" Final payment status: {job.get('payment_status', 'N/A')}")
|
||||
|
||||
# Get payment receipt
|
||||
if self.payment_id:
|
||||
receipt_response = self.client.get(
|
||||
f"{COORDINATOR_URL}/v1/payments/{self.payment_id}/receipt",
|
||||
headers=headers
|
||||
)
|
||||
|
||||
if receipt_response.status_code == 200:
|
||||
receipt = receipt_response.json()
|
||||
logger.info(f"✓ Payment receipt:")
|
||||
logger.info(f" Status: {receipt['status']}")
|
||||
logger.info(f" Verified at: {receipt.get('verified_at', 'N/A')}")
|
||||
logger.info(f" Transaction hash: {receipt.get('transaction_hash', 'N/A')}")
|
||||
else:
|
||||
logger.warning(f"Could not fetch payment receipt: {receipt_response.status_code}")
|
||||
else:
|
||||
raise Exception(f"Failed to verify payment release: {response.status_code}")
|
||||
|
||||
async def test_refund_flow(self):
|
||||
"""Test payment refund for failed jobs"""
|
||||
logger.info("Step 6: Testing refund flow...")
|
||||
|
||||
# Create a new job that will fail
|
||||
job_data = {
|
||||
"service_type": "llm",
|
||||
"service_params": {
|
||||
"model": "nonexistent_model",
|
||||
"prompt": "This should fail"
|
||||
},
|
||||
"payment_amount": 0.5,
|
||||
"payment_currency": "AITBC"
|
||||
}
|
||||
|
||||
headers = {"X-Client-Key": CLIENT_KEY}
|
||||
|
||||
response = self.client.post(
|
||||
f"{COORDINATOR_URL}/v1/jobs",
|
||||
json=job_data,
|
||||
headers=headers
|
||||
)
|
||||
|
||||
if response.status_code == 201:
|
||||
fail_job = response.json()
|
||||
fail_job_id = fail_job["job_id"]
|
||||
fail_payment_id = fail_job.get("payment_id")
|
||||
|
||||
logger.info(f"✓ Created test job for refund: {fail_job_id}")
|
||||
|
||||
# Simulate job failure
|
||||
fail_headers = {"X-Miner-Key": MINER_KEY}
|
||||
|
||||
# Poll for the job
|
||||
poll_response = self.client.post(
|
||||
f"{COORDINATOR_URL}/v1/miners/poll",
|
||||
json={"capabilities": ["llm"]},
|
||||
headers=fail_headers
|
||||
)
|
||||
|
||||
if poll_response.status_code == 200:
|
||||
poll_data = poll_response.json()
|
||||
if poll_data.get("job_id") == fail_job_id:
|
||||
# Submit failure
|
||||
fail_data = {
|
||||
"error_code": "MODEL_NOT_FOUND",
|
||||
"error_message": "The specified model does not exist"
|
||||
}
|
||||
|
||||
fail_response = self.client.post(
|
||||
f"{COORDINATOR_URL}/v1/miners/{fail_job_id}/fail",
|
||||
json=fail_data,
|
||||
headers=fail_headers
|
||||
)
|
||||
|
||||
if fail_response.status_code == 200:
|
||||
logger.info("✓ Job failure submitted")
|
||||
|
||||
# Wait for refund processing
|
||||
await asyncio.sleep(2)
|
||||
|
||||
# Check refund status
|
||||
if fail_payment_id:
|
||||
payment_response = self.client.get(
|
||||
f"{COORDINATOR_URL}/v1/payments/{fail_payment_id}",
|
||||
headers=headers
|
||||
)
|
||||
|
||||
if payment_response.status_code == 200:
|
||||
payment = payment_response.json()
|
||||
logger.info(f"✓ Payment refunded:")
|
||||
logger.info(f" Status: {payment['status']}")
|
||||
logger.info(f" Refunded at: {payment.get('refunded_at', 'N/A')}")
|
||||
else:
|
||||
logger.warning(f"Could not verify refund: {payment_response.status_code}")
|
||||
else:
|
||||
logger.warning(f"Failed to submit job failure: {fail_response.status_code}")
|
||||
|
||||
logger.info("\n=== Test Summary ===")
|
||||
logger.info("✓ Job creation with payment")
|
||||
logger.info("✓ Payment escrow creation")
|
||||
logger.info("✓ Job completion and payment release")
|
||||
logger.info("✓ Job failure and payment refund")
|
||||
logger.info("\nPayment integration is working correctly!")
|
||||
|
||||
async def main():
|
||||
"""Run the payment integration test"""
|
||||
test = PaymentIntegrationTest()
|
||||
|
||||
try:
|
||||
await test.test_complete_payment_flow()
|
||||
except Exception as e:
|
||||
logger.error(f"Test failed: {e}")
|
||||
raise
|
||||
|
||||
if __name__ == "__main__":
|
||||
asyncio.run(main())
|
||||
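The localhost variant of this script (below) retries `/miners/poll` when it gets a 204. That retry-until-ready loop is a reusable pattern, and both scripts could share it. A generic sketch with hypothetical names (`poll_until`, `fake_poll`), demonstrated against a simulated queue rather than a live coordinator:

```python
import time

def poll_until(fetch, ready, retries=5, delay=0.01):
    """Call `fetch` up to `retries` times, returning the first value for
    which `ready(value)` is true; returns None if every attempt fails."""
    for _ in range(retries):
        value = fetch()
        if ready(value):
            return value
        time.sleep(delay)
    return None

# Simulate a queue that only yields a job on the third poll.
attempts = {"n": 0}
def fake_poll():
    attempts["n"] += 1
    return {"job_id": "job-1"} if attempts["n"] >= 3 else None

job = poll_until(fake_poll, ready=lambda v: v is not None)
assert job == {"job_id": "job-1"}
assert attempts["n"] == 3
```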
tests/testing/test_payment_local.py (new executable file, 329 lines)
@@ -0,0 +1,329 @@
|
||||
#!/usr/bin/env python3
|
||||
"""
|
||||
Test script for AITBC Payment Integration (Localhost)
|
||||
Tests job creation with payments, escrow, release, and refund flows
|
||||
"""
|
||||
|
||||
import asyncio
|
||||
import httpx
|
||||
import json
|
||||
import logging
|
||||
from datetime import datetime
|
||||
from typing import Dict, Any
|
||||
|
||||
# Configure logging
|
||||
logging.basicConfig(level=logging.INFO)
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
# Configuration - Using localhost as we're testing from the server
|
||||
COORDINATOR_URL = "http://127.0.0.1:8000/v1"
|
||||
CLIENT_KEY = "${CLIENT_API_KEY}"
|
||||
MINER_KEY = "${MINER_API_KEY}"
|
||||
|
||||
class PaymentIntegrationTest:
|
||||
def __init__(self):
|
||||
self.client = httpx.Client(timeout=30.0)
|
||||
self.job_id = None
|
||||
self.payment_id = None
|
||||
|
||||
async def test_complete_payment_flow(self):
|
||||
"""Test the complete payment flow from job creation to payment release"""
|
||||
|
||||
logger.info("=== Starting AITBC Payment Integration Test (Localhost) ===")
|
||||
|
||||
# Step 1: Check coordinator health
|
||||
await self.check_health()
|
||||
|
||||
# Step 2: Submit a job with payment
|
||||
await self.submit_job_with_payment()
|
||||
|
||||
# Step 3: Check job status and payment
|
||||
await self.check_job_and_payment_status()
|
||||
|
||||
# Step 4: Simulate job completion by miner
|
||||
await self.complete_job()
|
||||
|
||||
# Step 5: Verify payment was released
|
||||
await self.verify_payment_release()
|
||||
|
||||
# Step 6: Test refund flow with a new job
|
||||
await self.test_refund_flow()
|
||||
|
||||
logger.info("=== Payment Integration Test Complete ===")
|
||||
|
||||
async def check_health(self):
|
||||
"""Check if coordinator API is healthy"""
|
||||
logger.info("Step 1: Checking coordinator health...")
|
||||
|
||||
response = self.client.get(f"{COORDINATOR_URL}/health")
|
||||
|
||||
if response.status_code == 200:
|
||||
logger.info(f"✓ Coordinator healthy: {response.json()}")
|
||||
else:
|
||||
raise Exception(f"Coordinator health check failed: {response.status_code}")
|
||||
|
||||
async def submit_job_with_payment(self):
|
||||
"""Submit a job with AITBC token payment"""
|
||||
logger.info("Step 2: Submitting job with payment...")
|
||||
|
||||
job_data = {
|
||||
"payload": {
|
||||
"service_type": "llm",
|
||||
"model": "llama3.2",
|
||||
"prompt": "What is AITBC?",
|
||||
"max_tokens": 100
|
||||
},
|
||||
"constraints": {},
|
||||
"payment_amount": 1.0,
|
||||
"payment_currency": "AITBC",
|
||||
"escrow_timeout_seconds": 3600
|
||||
}
|
||||
|
||||
headers = {"X-Api-Key": CLIENT_KEY}
|
||||
|
||||
response = self.client.post(
|
||||
f"{COORDINATOR_URL}/jobs",
|
||||
json=job_data,
|
||||
headers=headers
|
||||
)
|
||||
|
||||
if response.status_code == 201:
|
||||
job = response.json()
|
||||
self.job_id = job["job_id"]
|
||||
logger.info(f"✓ Job created with ID: {self.job_id}")
|
||||
logger.info(f" Payment status: {job.get('payment_status', 'N/A')}")
|
||||
else:
|
||||
logger.error(f"Failed to create job: {response.status_code}")
|
||||
logger.error(f"Response: {response.text}")
|
||||
raise Exception(f"Failed to create job: {response.status_code}")
|
||||
|
||||
async def check_job_and_payment_status(self):
|
||||
"""Check job status and payment details"""
|
||||
logger.info("Step 3: Checking job and payment status...")
|
||||
|
||||
headers = {"X-Api-Key": CLIENT_KEY}
|
||||
|
||||
# Get job status
|
||||
response = self.client.get(
|
||||
f"{COORDINATOR_URL}/jobs/{self.job_id}",
|
||||
headers=headers
|
||||
)
|
||||
|
||||
if response.status_code == 200:
|
||||
job = response.json()
|
||||
logger.info(f"✓ Job status: {job['state']}")
|
||||
logger.info(f" Payment ID: {job.get('payment_id', 'N/A')}")
|
||||
logger.info(f" Payment status: {job.get('payment_status', 'N/A')}")
|
||||
|
||||
self.payment_id = job.get('payment_id')
|
||||
|
||||
# Get payment details if payment_id exists
|
||||
if self.payment_id:
|
||||
payment_response = self.client.get(
|
||||
f"{COORDINATOR_URL}/payments/{self.payment_id}",
|
||||
headers=headers
|
||||
)
|
||||
|
||||
if payment_response.status_code == 200:
|
||||
payment = payment_response.json()
|
||||
logger.info(f"✓ Payment details:")
|
||||
logger.info(f" Amount: {payment['amount']} {payment['currency']}")
|
||||
logger.info(f" Status: {payment['status']}")
|
||||
logger.info(f" Method: {payment['payment_method']}")
|
||||
else:
|
||||
logger.warning(f"Could not fetch payment details: {payment_response.status_code}")
|
||||
else:
|
||||
raise Exception(f"Failed to get job status: {response.status_code}")
|
||||
|
||||
async def complete_job(self):
|
||||
"""Simulate miner completing the job"""
|
||||
logger.info("Step 4: Simulating job completion...")
|
||||
|
||||
# First, poll for the job as miner (with retry for 204)
|
||||
headers = {"X-Api-Key": MINER_KEY}
|
||||
|
||||
poll_data = None
|
||||
for attempt in range(5):
|
||||
poll_response = self.client.post(
|
||||
f"{COORDINATOR_URL}/miners/poll",
|
||||
json={"capabilities": {"llm": True}},
|
||||
headers=headers
|
||||
)
|
||||
|
||||
if poll_response.status_code == 200:
|
||||
poll_data = poll_response.json()
|
||||
break
|
||||
elif poll_response.status_code == 204:
|
||||
logger.info(f" No job available yet, retrying... ({attempt + 1}/5)")
|
||||
await asyncio.sleep(1)
|
||||
else:
|
||||
raise Exception(f"Failed to poll for job: {poll_response.status_code}")
|
||||
|
||||
if poll_data and poll_data.get("job_id") == self.job_id:
|
||||
logger.info(f"✓ Miner received job: {self.job_id}")
|
||||
|
||||
# Submit job result
|
||||
result_data = {
|
||||
"result": {
|
||||
"text": "AITBC is a decentralized AI computing marketplace that uses blockchain for payments and zero-knowledge proofs for privacy.",
|
||||
"model": "llama3.2",
|
||||
"tokens_used": 42
|
||||
},
|
||||
"metrics": {
|
||||
"duration_ms": 2500,
|
||||
"tokens_used": 42,
|
||||
"gpu_seconds": 0.5
|
||||
}
|
||||
}
|
||||
|
||||
submit_response = self.client.post(
|
||||
f"{COORDINATOR_URL}/miners/{self.job_id}/result",
|
||||
json=result_data,
|
||||
headers=headers
|
||||
)
|
||||
|
||||
if submit_response.status_code == 200:
|
||||
logger.info("✓ Job result submitted successfully")
|
||||
logger.info(f" Receipt: {submit_response.json().get('receipt', {}).get('receipt_id', 'N/A')}")
|
||||
else:
|
||||
raise Exception(f"Failed to submit result: {submit_response.status_code}")
|
||||
elif poll_data:
|
||||
logger.warning(f"Miner received different job: {poll_data.get('job_id')}")
|
||||
else:
|
||||
raise Exception("No job received after 5 retries")
|
||||
|
||||
async def verify_payment_release(self):
|
||||
"""Verify that payment was released after job completion"""
|
||||
logger.info("Step 5: Verifying payment release...")
|
||||
|
||||
# Wait a moment for payment processing
|
||||
await asyncio.sleep(2)
|
||||
|
||||
headers = {"X-Api-Key": CLIENT_KEY}
|
||||
|
||||
# Check updated job status
|
||||
response = self.client.get(
|
||||
f"{COORDINATOR_URL}/jobs/{self.job_id}",
|
||||
headers=headers
|
||||
)
|
||||
|
||||
if response.status_code == 200:
|
||||
job = response.json()
|
||||
logger.info(f"✓ Final job status: {job['state']}")
|
||||
logger.info(f" Final payment status: {job.get('payment_status', 'N/A')}")
|
||||
|
||||
# Get payment receipt
|
||||
if self.payment_id:
|
||||
receipt_response = self.client.get(
|
||||
f"{COORDINATOR_URL}/payments/{self.payment_id}/receipt",
|
||||
headers=headers
|
||||
)
|
||||
|
||||
if receipt_response.status_code == 200:
|
||||
receipt = receipt_response.json()
|
||||
logger.info(f"✓ Payment receipt:")
|
||||
logger.info(f" Status: {receipt['status']}")
|
||||
logger.info(f" Verified at: {receipt.get('verified_at', 'N/A')}")
|
||||
logger.info(f" Transaction hash: {receipt.get('transaction_hash', 'N/A')}")
|
||||
else:
|
||||
logger.warning(f"Could not fetch payment receipt: {receipt_response.status_code}")
|
||||
else:
|
||||
raise Exception(f"Failed to verify payment release: {response.status_code}")
|
||||
|
||||
async def test_refund_flow(self):
|
||||
"""Test payment refund for failed jobs"""
|
||||
logger.info("Step 6: Testing refund flow...")
|
||||
|
||||
# Create a new job that will fail
|
||||
job_data = {
|
||||
"payload": {
|
||||
"service_type": "llm",
|
||||
"model": "nonexistent_model",
|
||||
"prompt": "This should fail"
|
||||
},
|
||||
"payment_amount": 0.5,
|
||||
"payment_currency": "AITBC"
|
||||
}
|
||||
|
||||
headers = {"X-Api-Key": CLIENT_KEY}
|
||||
|
||||
response = self.client.post(
|
||||
f"{COORDINATOR_URL}/jobs",
|
||||
json=job_data,
|
||||
headers=headers
|
||||
)
|
||||
|
||||
if response.status_code == 201:
|
||||
fail_job = response.json()
|
||||
fail_job_id = fail_job["job_id"]
|
||||
fail_payment_id = fail_job.get("payment_id")
|
||||
|
||||
logger.info(f"✓ Created test job for refund: {fail_job_id}")
|
||||
|
||||
# Simulate job failure
|
||||
fail_headers = {"X-Api-Key": MINER_KEY}
|
||||
|
||||
# Poll for the job
|
||||
poll_response = self.client.post(
|
||||
f"{COORDINATOR_URL}/miners/poll",
|
||||
json={"capabilities": ["llm"]},
|
||||
headers=fail_headers
|
||||
)
|
||||
|
||||
if poll_response.status_code == 200:
|
||||
poll_data = poll_response.json()
|
||||
if poll_data.get("job_id") == fail_job_id:
|
||||
# Submit failure
|
||||
fail_data = {
|
||||
"error_code": "MODEL_NOT_FOUND",
|
||||
"error_message": "The specified model does not exist"
|
||||
}
|
||||
|
||||
fail_response = self.client.post(
|
||||
f"{COORDINATOR_URL}/miners/{fail_job_id}/fail",
|
||||
json=fail_data,
|
||||
headers=fail_headers
|
||||
)
|
||||
|
||||
if fail_response.status_code == 200:
|
||||
logger.info("✓ Job failure submitted")
|
||||
|
||||
# Wait for refund processing
|
||||
await asyncio.sleep(2)
|
||||
|
||||
# Check refund status
|
||||
if fail_payment_id:
|
||||
payment_response = self.client.get(
|
||||
f"{COORDINATOR_URL}/payments/{fail_payment_id}",
|
||||
headers=headers
|
||||
)
|
||||
|
||||
if payment_response.status_code == 200:
|
||||
payment = payment_response.json()
|
||||
logger.info(f"✓ Payment refunded:")
|
||||
logger.info(f" Status: {payment['status']}")
|
||||
logger.info(f" Refunded at: {payment.get('refunded_at', 'N/A')}")
|
||||
else:
|
||||
logger.warning(f"Could not verify refund: {payment_response.status_code}")
|
||||
else:
|
||||
logger.warning(f"Failed to submit job failure: {fail_response.status_code}")
|
||||
|
||||
logger.info("\n=== Test Summary ===")
|
||||
logger.info("✓ Job creation with payment")
|
||||
logger.info("✓ Payment escrow creation")
|
||||
logger.info("✓ Job completion and payment release")
|
||||
logger.info("✓ Job failure and payment refund")
|
||||
logger.info("\nPayment integration is working correctly!")
|
||||
|
||||
async def main():
|
||||
"""Run the payment integration test"""
|
||||
test = PaymentIntegrationTest()
|
||||
|
||||
try:
|
||||
await test.test_complete_payment_flow()
|
||||
except Exception as e:
|
||||
logger.error(f"Test failed: {e}")
|
||||
raise
|
||||
|
||||
if __name__ == "__main__":
|
||||
asyncio.run(main())
|
||||
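Note that `CLIENT_KEY = "${CLIENT_API_KEY}"` is a shell-style placeholder that Python never expands: unless something substitutes it before the script runs, the literal string `${CLIENT_API_KEY}` is sent as the API key. Reading keys from the environment avoids that foot-gun. A sketch, assuming the environment variable names match the placeholders (the `api_key` helper and the `demo-key` value are illustrative):

```python
import os

def api_key(name: str, default: str = "") -> str:
    """Resolve an API key from the environment instead of relying on
    ${...} placeholders, which Python treats as ordinary characters."""
    return os.environ.get(name, default)

os.environ["CLIENT_API_KEY"] = "demo-key"  # stand-in for real deployment config
assert api_key("CLIENT_API_KEY") == "demo-key"
assert api_key("MISSING_KEY", default="dev") == "dev"
```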
tests/testing/test_performance.py (new executable file, 569 lines)
@@ -0,0 +1,569 @@
|
||||
"""
|
||||
Performance Tests for AITBC Chain Management and Analytics
|
||||
Tests system performance under various load conditions
|
||||
"""
|
||||
|
||||
import pytest
|
||||
import asyncio
|
||||
import json
|
||||
import time
|
||||
import threading
|
||||
import statistics
|
||||
from datetime import datetime, timedelta
|
||||
from pathlib import Path
|
||||
import subprocess
|
||||
import requests
|
||||
from concurrent.futures import ThreadPoolExecutor, as_completed
|
||||
from typing import Dict, Any, List, Tuple
|
||||
import psutil
|
||||
import memory_profiler
|
||||
|
||||
class TestPerformance:
|
||||
"""Performance testing suite for AITBC components"""
|
||||
|
||||
@pytest.fixture(scope="class")
|
||||
def performance_config(self):
|
||||
"""Performance test configuration"""
|
||||
return {
|
||||
"base_url": "http://localhost",
|
||||
"ports": {
|
||||
"coordinator": 8001,
|
||||
"blockchain": 8007,
|
||||
"consensus": 8002,
|
||||
"network": 8008,
|
||||
"explorer": 8016,
|
||||
"wallet_daemon": 8003,
|
||||
"exchange": 8010,
|
||||
"oracle": 8011,
|
||||
"trading": 8012,
|
||||
"compliance": 8015,
|
||||
"plugin_registry": 8013,
|
||||
"plugin_marketplace": 8014,
|
||||
"global_infrastructure": 8017,
|
||||
"ai_agents": 8018,
|
||||
"load_balancer": 8019
|
||||
},
|
||||
"load_test_config": {
|
||||
"concurrent_users": 10,
|
||||
"requests_per_user": 100,
|
||||
"duration_seconds": 60,
|
||||
"ramp_up_time": 10
|
||||
},
|
||||
"performance_thresholds": {
|
||||
"response_time_p95": 2000, # 95th percentile < 2 seconds
|
||||
"response_time_p99": 5000, # 99th percentile < 5 seconds
|
||||
"error_rate": 0.01, # < 1% error rate
|
||||
"throughput_min": 50, # Minimum 50 requests/second
|
||||
"cpu_usage_max": 0.80, # < 80% CPU usage
|
||||
"memory_usage_max": 0.85 # < 85% memory usage
|
||||
}
|
||||
}
|
||||
|
||||
@pytest.fixture(scope="class")
|
||||
def baseline_metrics(self, performance_config):
|
||||
"""Capture baseline system metrics"""
|
||||
return {
|
||||
"cpu_percent": psutil.cpu_percent(interval=1),
|
||||
"memory_percent": psutil.virtual_memory().percent,
|
||||
"timestamp": datetime.utcnow().isoformat()
|
||||
}
|
||||
|
||||
def test_cli_performance(self, performance_config):
|
||||
"""Test CLI command performance"""
|
||||
cli_commands = [
|
||||
["--help"],
|
||||
["wallet", "--help"],
|
||||
["blockchain", "--help"],
|
||||
["multisig", "--help"],
|
||||
["genesis-protection", "--help"],
|
||||
["transfer-control", "--help"],
|
||||
["compliance", "--help"],
|
||||
["exchange", "--help"],
|
||||
["oracle", "--help"],
|
||||
["market-maker", "--help"]
|
||||
]
|
||||
|
||||
response_times = []
|
||||
|
||||
for command in cli_commands:
|
||||
start_time = time.time()
|
||||
|
||||
result = subprocess.run(
|
||||
["python", "-m", "aitbc_cli.main"] + command,
|
||||
capture_output=True,
|
||||
text=True,
|
||||
cwd="/home/oib/windsurf/aitbc/cli"
|
||||
)
|
||||
|
||||
end_time = time.time()
|
||||
response_time = (end_time - start_time) * 1000 # Convert to milliseconds
|
||||
|
||||
assert result.returncode == 0, f"CLI command failed: {' '.join(command)}"
|
||||
assert response_time < 5000, f"CLI command too slow: {response_time:.2f}ms"
|
||||
|
||||
response_times.append(response_time)
|
||||
|
||||
# Calculate performance statistics
|
||||
avg_response_time = statistics.mean(response_times)
|
||||
p95_response_time = statistics.quantiles(response_times, n=20)[18] # 95th percentile
|
||||
max_response_time = max(response_times)
|
||||
|
||||
# Performance assertions
|
||||
assert avg_response_time < 1000, f"Average CLI response time too high: {avg_response_time:.2f}ms"
|
||||
assert p95_response_time < 3000, f"95th percentile CLI response time too high: {p95_response_time:.2f}ms"
|
||||
assert max_response_time < 10000, f"Maximum CLI response time too high: {max_response_time:.2f}ms"
|
||||
|
||||
print(f"CLI Performance Results:")
|
||||
print(f" Average: {avg_response_time:.2f}ms")
|
||||
print(f" 95th percentile: {p95_response_time:.2f}ms")
|
||||
print(f" Maximum: {max_response_time:.2f}ms")
|
||||
|
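These tests read the 95th percentile as `statistics.quantiles(data, n=20)[18]`: `n=20` splits the distribution into twenty 5% slices, so the 19th cut point (index 18) sits at the 95% mark. A quick sanity check on synthetic data:

```python
import statistics

response_times = list(range(1, 101))  # 1..100 "milliseconds"
cuts = statistics.quantiles(response_times, n=20)
p95 = cuts[18]
assert len(cuts) == 19    # quantiles returns n-1 cut points
assert 95 <= p95 <= 97    # ~95th percentile of 1..100
print(f"p95 = {p95:.2f}ms")
```

The default `method='exclusive'` interpolates between data points, so the exact value depends on sample size; with the small 10-command sample in `test_cli_performance` the estimate is correspondingly coarse.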
||||
    def test_concurrent_cli_operations(self, performance_config):
        """Test concurrent CLI operations"""
        def run_cli_command(command):
            start_time = time.time()
            result = subprocess.run(
                ["python", "-m", "aitbc_cli.main"] + command,
                capture_output=True,
                text=True,
                cwd="/home/oib/windsurf/aitbc/cli"
            )
            end_time = time.time()
            return {
                "command": command,
                "success": result.returncode == 0,
                "response_time": (end_time - start_time) * 1000,
                "output_length": len(result.stdout)
            }

        # Test concurrent operations
        commands_to_test = [
            ["wallet", "--help"],
            ["blockchain", "--help"],
            ["multisig", "--help"],
            ["compliance", "--help"],
            ["exchange", "--help"]
        ]

        with ThreadPoolExecutor(max_workers=10) as executor:
            # Submit multiple concurrent requests
            futures = []
            for _ in range(20):  # 20 concurrent operations
                for command in commands_to_test:
                    future = executor.submit(run_cli_command, command)
                    futures.append(future)

            # Collect results
            results = []
            for future in as_completed(futures):
                result = future.result()
                results.append(result)

        # Analyze results
        successful_operations = [r for r in results if r["success"]]
        response_times = [r["response_time"] for r in successful_operations]

        success_rate = len(successful_operations) / len(results)
        avg_response_time = statistics.mean(response_times) if response_times else 0
        p95_response_time = statistics.quantiles(response_times, n=20)[18] if len(response_times) > 20 else max(response_times) if response_times else 0

        # Performance assertions
        assert success_rate >= 0.95, f"Low success rate: {success_rate:.2%}"
        assert avg_response_time < 2000, f"Average response time too high: {avg_response_time:.2f}ms"
        assert p95_response_time < 5000, f"95th percentile response time too high: {p95_response_time:.2f}ms"

        print("Concurrent CLI Operations Results:")
        print(f" Success rate: {success_rate:.2%}")
        print(f" Average response time: {avg_response_time:.2f}ms")
        print(f" 95th percentile: {p95_response_time:.2f}ms")
        print(f" Total operations: {len(results)}")

    def test_memory_usage_cli(self, performance_config):
        """Test memory usage during CLI operations"""
        @memory_profiler.profile
        def run_memory_intensive_cli_operations():
            commands = [
                ["wallet", "--help"],
                ["blockchain", "--help"],
                ["multisig", "--help"],
                ["genesis-protection", "--help"],
                ["transfer-control", "--help"],
                ["compliance", "--help"],
                ["exchange", "--help"],
                ["oracle", "--help"],
                ["market-maker", "--help"]
            ]

            for _ in range(10):  # Run commands multiple times
                for command in commands:
                    subprocess.run(
                        ["python", "-m", "aitbc_cli.main"] + command,
                        capture_output=True,
                        text=True,
                        cwd="/home/oib/windsurf/aitbc/cli"
                    )

        # Capture memory before test
        memory_before = psutil.virtual_memory().percent

        # Run memory-intensive operations
        run_memory_intensive_cli_operations()

        # Capture memory after test
        memory_after = psutil.virtual_memory().percent
        memory_increase = memory_after - memory_before

        # Memory assertion
        assert memory_increase < 20, f"Memory usage increased too much: {memory_increase:.1f}%"

        print("Memory Usage Results:")
        print(f" Memory before: {memory_before:.1f}%")
        print(f" Memory after: {memory_after:.1f}%")
        print(f" Memory increase: {memory_increase:.1f}%")

    def test_load_balancing_performance(self, performance_config):
        """Test load balancer performance under load"""
        def make_load_balancer_request():
            try:
                start_time = time.time()
                response = requests.get(
                    f"{performance_config['base_url']}:{performance_config['ports']['load_balancer']}/health",
                    timeout=5
                )
                end_time = time.time()

                return {
                    "success": response.status_code == 200,
                    "response_time": (end_time - start_time) * 1000,
                    "status_code": response.status_code
                }
            except Exception as e:
                return {
                    "success": False,
                    "response_time": 5000,  # Timeout
                    "error": str(e)
                }

        # Test with concurrent requests
        with ThreadPoolExecutor(max_workers=20) as executor:
            futures = [executor.submit(make_load_balancer_request) for _ in range(100)]
            results = [future.result() for future in as_completed(futures)]

        # Analyze results
        successful_requests = [r for r in results if r["success"]]
        response_times = [r["response_time"] for r in successful_requests]

        if response_times:
            success_rate = len(successful_requests) / len(results)
            avg_response_time = statistics.mean(response_times)
            p95_response_time = statistics.quantiles(response_times, n=20)[18] if len(response_times) > 20 else max(response_times)
            throughput = len(successful_requests) / 10  # requests per second

            # Performance assertions
            assert success_rate >= 0.90, f"Low success rate: {success_rate:.2%}"
            assert avg_response_time < 1000, f"Average response time too high: {avg_response_time:.2f}ms"
            assert throughput >= 10, f"Throughput too low: {throughput:.2f} req/s"

            print("Load Balancer Performance Results:")
            print(f" Success rate: {success_rate:.2%}")
            print(f" Average response time: {avg_response_time:.2f}ms")
            print(f" 95th percentile: {p95_response_time:.2f}ms")
            print(f" Throughput: {throughput:.2f} req/s")

    def test_global_infrastructure_performance(self, performance_config):
        """Test global infrastructure performance"""
        def test_service_performance(service_name, port):
            try:
                start_time = time.time()
                response = requests.get(f"{performance_config['base_url']}:{port}/health", timeout=5)
                end_time = time.time()

                return {
                    "service": service_name,
                    "success": response.status_code == 200,
                    "response_time": (end_time - start_time) * 1000,
                    "status_code": response.status_code
                }
            except Exception as e:
                return {
                    "service": service_name,
                    "success": False,
                    "response_time": 5000,
                    "error": str(e)
                }

        # Test all global services
        global_services = {
            "global_infrastructure": performance_config["ports"]["global_infrastructure"],
            "ai_agents": performance_config["ports"]["ai_agents"],
            "load_balancer": performance_config["ports"]["load_balancer"]
        }

        with ThreadPoolExecutor(max_workers=5) as executor:
            futures = [
                executor.submit(test_service_performance, service_name, port)
                for service_name, port in global_services.items()
            ]
            results = [future.result() for future in as_completed(futures)]

        # Analyze results
        successful_services = [r for r in results if r["success"]]
        response_times = [r["response_time"] for r in successful_services]

        if response_times:
            avg_response_time = statistics.mean(response_times)
            max_response_time = max(response_times)

            # Performance assertions
            assert len(successful_services) >= 2, f"Too few successful services: {len(successful_services)}"
            assert avg_response_time < 2000, f"Average response time too high: {avg_response_time:.2f}ms"
            assert max_response_time < 5000, f"Maximum response time too high: {max_response_time:.2f}ms"

            print("Global Infrastructure Performance Results:")
            print(f" Successful services: {len(successful_services)}/{len(results)}")
            print(f" Average response time: {avg_response_time:.2f}ms")
            print(f" Maximum response time: {max_response_time:.2f}ms")

    def test_ai_agent_communication_performance(self, performance_config):
        """Test AI agent communication performance"""
        def test_agent_communication():
            try:
                start_time = time.time()
                response = requests.get(
                    f"{performance_config['base_url']}:{performance_config['ports']['ai_agents']}/api/v1/network/dashboard",
                    timeout=5
                )
                end_time = time.time()

                return {
                    "success": response.status_code == 200,
                    "response_time": (end_time - start_time) * 1000,
                    "data_size": len(response.content)
                }
            except Exception as e:
                return {
                    "success": False,
                    "response_time": 5000,
                    "error": str(e)
                }

        # Test concurrent agent communications
        with ThreadPoolExecutor(max_workers=10) as executor:
            futures = [executor.submit(test_agent_communication) for _ in range(50)]
            results = [future.result() for future in as_completed(futures)]

        # Analyze results
        successful_requests = [r for r in results if r["success"]]
        response_times = [r["response_time"] for r in successful_requests]

        if response_times:
            success_rate = len(successful_requests) / len(results)
            avg_response_time = statistics.mean(response_times)
            p95_response_time = statistics.quantiles(response_times, n=20)[18] if len(response_times) > 20 else max(response_times)

            # Performance assertions
            assert success_rate >= 0.80, f"Low success rate: {success_rate:.2%}"
            assert avg_response_time < 3000, f"Average response time too high: {avg_response_time:.2f}ms"
            assert p95_response_time < 8000, f"95th percentile response time too high: {p95_response_time:.2f}ms"

            print("AI Agent Communication Performance Results:")
            print(f" Success rate: {success_rate:.2%}")
            print(f" Average response time: {avg_response_time:.2f}ms")
            print(f" 95th percentile: {p95_response_time:.2f}ms")
            print(f" Total requests: {len(results)}")

    def test_plugin_ecosystem_performance(self, performance_config):
        """Test plugin ecosystem performance"""
        plugin_services = {
            "plugin_registry": performance_config["ports"]["plugin_registry"],
            "plugin_marketplace": performance_config["ports"]["plugin_marketplace"],
            "plugin_analytics": performance_config["ports"]["plugin_analytics"]
        }

        def test_plugin_service(service_name, port):
            try:
                start_time = time.time()
                response = requests.get(f"{performance_config['base_url']}:{port}/health", timeout=5)
                end_time = time.time()

                return {
                    "service": service_name,
                    "success": response.status_code == 200,
                    "response_time": (end_time - start_time) * 1000
                }
            except Exception as e:
                return {
                    "service": service_name,
                    "success": False,
                    "response_time": 5000,
                    "error": str(e)
                }

        with ThreadPoolExecutor(max_workers=3) as executor:
            futures = [
                executor.submit(test_plugin_service, service_name, port)
                for service_name, port in plugin_services.items()
            ]
            results = [future.result() for future in as_completed(futures)]

        # Analyze results
        successful_services = [r for r in results if r["success"]]
        response_times = [r["response_time"] for r in successful_services]

        if response_times:
            avg_response_time = statistics.mean(response_times)

            # Performance assertions
            assert len(successful_services) >= 1, "No plugin services responding"
            assert avg_response_time < 2000, f"Average response time too high: {avg_response_time:.2f}ms"

            print("Plugin Ecosystem Performance Results:")
            print(f" Successful services: {len(successful_services)}/{len(results)}")
            print(f" Average response time: {avg_response_time:.2f}ms")

    def test_system_resource_usage(self, performance_config, baseline_metrics):
        """Test system resource usage during operations"""
        # Monitor system resources during intensive operations
        resource_samples = []

        def monitor_resources():
            for _ in range(30):  # Monitor for 30 seconds
                cpu_percent = psutil.cpu_percent(interval=1)
                memory_percent = psutil.virtual_memory().percent

                resource_samples.append({
                    "timestamp": datetime.utcnow().isoformat(),
                    "cpu_percent": cpu_percent,
                    "memory_percent": memory_percent
                })

        def run_intensive_operations():
            # Run intensive CLI operations
            commands = [
                ["wallet", "--help"],
                ["blockchain", "--help"],
                ["multisig", "--help"],
                ["compliance", "--help"]
            ]

            for _ in range(20):
                for command in commands:
                    subprocess.run(
                        ["python", "-m", "aitbc_cli.main"] + command,
                        capture_output=True,
                        text=True,
                        cwd="/home/oib/windsurf/aitbc/cli"
                    )

        # Run monitoring and operations concurrently
        monitor_thread = threading.Thread(target=monitor_resources)
        operation_thread = threading.Thread(target=run_intensive_operations)

        monitor_thread.start()
        operation_thread.start()

        monitor_thread.join()
        operation_thread.join()

        # Analyze resource usage
        cpu_values = [sample["cpu_percent"] for sample in resource_samples]
        memory_values = [sample["memory_percent"] for sample in resource_samples]

        avg_cpu = statistics.mean(cpu_values)
        max_cpu = max(cpu_values)
        avg_memory = statistics.mean(memory_values)
        max_memory = max(memory_values)

        # Resource assertions
        assert avg_cpu < 70, f"Average CPU usage too high: {avg_cpu:.1f}%"
        assert max_cpu < 90, f"Maximum CPU usage too high: {max_cpu:.1f}%"
        assert avg_memory < 80, f"Average memory usage too high: {avg_memory:.1f}%"
        assert max_memory < 95, f"Maximum memory usage too high: {max_memory:.1f}%"

        print("System Resource Usage Results:")
        print(f" Average CPU: {avg_cpu:.1f}% (max: {max_cpu:.1f}%)")
        print(f" Average Memory: {avg_memory:.1f}% (max: {max_memory:.1f}%)")
        print(f" Baseline CPU: {baseline_metrics['cpu_percent']:.1f}%")
        print(f" Baseline Memory: {baseline_metrics['memory_percent']:.1f}%")

    def test_stress_test_cli(self, performance_config):
        """Stress test CLI with high load"""
        def stress_cli_worker(worker_id):
            results = []
            commands = [
                ["wallet", "--help"],
                ["blockchain", "--help"],
                ["multisig", "--help"],
                ["compliance", "--help"]
            ]

            for i in range(50):  # 50 operations per worker
                command = commands[i % len(commands)]
                start_time = time.time()

                result = subprocess.run(
                    ["python", "-m", "aitbc_cli.main"] + command,
                    capture_output=True,
                    text=True,
                    cwd="/home/oib/windsurf/aitbc/cli"
                )

                end_time = time.time()

                results.append({
                    "worker_id": worker_id,
                    "operation_id": i,
                    "success": result.returncode == 0,
                    "response_time": (end_time - start_time) * 1000
                })

            return results

        # Run stress test with multiple workers
        with ThreadPoolExecutor(max_workers=5) as executor:
            futures = [executor.submit(stress_cli_worker, i) for i in range(5)]
            all_results = []

            for future in as_completed(futures):
                worker_results = future.result()
                all_results.extend(worker_results)

        # Analyze stress test results
        successful_operations = [r for r in all_results if r["success"]]
        response_times = [r["response_time"] for r in successful_operations]

        success_rate = len(successful_operations) / len(all_results)
        avg_response_time = statistics.mean(response_times) if response_times else 0
        p95_response_time = statistics.quantiles(response_times, n=20)[18] if len(response_times) > 20 else max(response_times) if response_times else 0
        total_throughput = len(successful_operations) / 30  # operations per second

        # Stress test assertions (more lenient thresholds)
        assert success_rate >= 0.90, f"Low success rate under stress: {success_rate:.2%}"
        assert avg_response_time < 5000, f"Average response time too high under stress: {avg_response_time:.2f}ms"
        assert total_throughput >= 5, f"Throughput too low under stress: {total_throughput:.2f} ops/s"

        print("CLI Stress Test Results:")
        print(f" Total operations: {len(all_results)}")
        print(f" Success rate: {success_rate:.2%}")
        print(f" Average response time: {avg_response_time:.2f}ms")
        print(f" 95th percentile: {p95_response_time:.2f}ms")
        print(f" Throughput: {total_throughput:.2f} ops/s")

class TestLoadTesting:
    """Load testing for high-volume scenarios"""

    def test_load_test_blockchain_operations(self, performance_config):
        """Load test blockchain operations"""
        # This would test blockchain operations under high load
        # Implementation depends on blockchain service availability
        pass

    def test_load_test_trading_operations(self, performance_config):
        """Load test trading operations"""
        # This would test trading operations under high load
        # Implementation depends on trading service availability
        pass


if __name__ == "__main__":
    # Run performance tests
    pytest.main([__file__, "-v", "--tb=short"])

512
tests/testing/test_performance_benchmarks.py
Executable file
@@ -0,0 +1,512 @@

"""
|
||||
Performance Benchmark Tests for AITBC
|
||||
Tests system performance under various loads and conditions
|
||||
"""
|
||||
|
||||
import pytest
|
||||
import time
|
||||
import asyncio
|
||||
import threading
|
||||
from datetime import datetime, timedelta
|
||||
from unittest.mock import Mock, patch
|
||||
from concurrent.futures import ThreadPoolExecutor
|
||||
import statistics
|
||||
|
||||
|
||||
class TestAPIPerformance:
|
||||
"""Test API endpoint performance"""
|
||||
|
||||
def test_response_time_benchmarks(self):
|
||||
"""Test API response time benchmarks"""
|
||||
# Mock API client
|
||||
client = Mock()
|
||||
|
||||
# Simulate different response times
|
||||
response_times = [0.05, 0.08, 0.12, 0.06, 0.09, 0.11, 0.07, 0.10]
|
||||
|
||||
# Calculate performance metrics
|
||||
avg_response_time = statistics.mean(response_times)
|
||||
max_response_time = max(response_times)
|
||||
min_response_time = min(response_times)
|
||||
|
||||
# Performance assertions
|
||||
assert avg_response_time < 0.1 # Average should be under 100ms
|
||||
assert max_response_time < 0.2 # Max should be under 200ms
|
||||
assert min_response_time > 0.01 # Should be reasonable minimum
|
||||
|
||||
# Test performance thresholds
|
||||
performance_thresholds = {
|
||||
'excellent': 0.05, # < 50ms
|
||||
'good': 0.1, # < 100ms
|
||||
'acceptable': 0.2, # < 200ms
|
||||
'poor': 0.5 # > 500ms
|
||||
}
|
||||
|
||||
# Classify performance
|
||||
if avg_response_time < performance_thresholds['excellent']:
|
||||
performance_rating = 'excellent'
|
||||
elif avg_response_time < performance_thresholds['good']:
|
||||
performance_rating = 'good'
|
||||
elif avg_response_time < performance_thresholds['acceptable']:
|
||||
performance_rating = 'acceptable'
|
||||
else:
|
||||
performance_rating = 'poor'
|
||||
|
||||
assert performance_rating in ['excellent', 'good', 'acceptable']
|
||||
|
||||
    def test_concurrent_request_handling(self):
        """Test handling of concurrent requests"""
        # Mock API endpoint
        def mock_api_call(request_id):
            time.sleep(0.01)  # Simulate 10ms processing time
            return {'request_id': request_id, 'status': 'success'}

        # Test concurrent execution
        num_requests = 50
        start_time = time.time()

        with ThreadPoolExecutor(max_workers=10) as executor:
            futures = [
                executor.submit(mock_api_call, i)
                for i in range(num_requests)
            ]
            results = [future.result() for future in futures]

        end_time = time.time()
        total_time = end_time - start_time

        # Performance assertions
        assert len(results) == num_requests
        assert all(result['status'] == 'success' for result in results)
        assert total_time < 1.0  # Should complete in under 1 second

        # Calculate throughput
        throughput = num_requests / total_time
        assert throughput > 50  # Should handle at least 50 requests per second

    def test_memory_usage_under_load(self):
        """Test memory usage under load"""
        import psutil
        import os

        # Get initial memory usage
        process = psutil.Process(os.getpid())
        initial_memory = process.memory_info().rss / 1024 / 1024  # MB

        # Simulate memory-intensive operations
        data_store = []
        for i in range(1000):
            data_store.append({
                'id': i,
                'data': 'x' * 1000,  # 1KB per item
                'timestamp': datetime.utcnow().isoformat()
            })

        # Get peak memory usage
        peak_memory = process.memory_info().rss / 1024 / 1024  # MB
        memory_increase = peak_memory - initial_memory

        # Memory assertions
        assert memory_increase < 100  # Should not increase by more than 100MB
        assert len(data_store) == 1000

        # Cleanup
        del data_store


class TestDatabasePerformance:
    """Test database operation performance"""

    def test_query_performance(self):
        """Test database query performance"""
        # Mock database operations
        def mock_query(query_type):
            if query_type == 'simple':
                time.sleep(0.001)  # 1ms
            elif query_type == 'complex':
                time.sleep(0.01)  # 10ms
            elif query_type == 'aggregate':
                time.sleep(0.05)  # 50ms
            return {'results': ['data'], 'query_type': query_type}

        # Test different query types
        query_types = ['simple', 'complex', 'aggregate']
        query_times = {}

        for query_type in query_types:
            start_time = time.time()
            result = mock_query(query_type)
            end_time = time.time()
            query_times[query_type] = end_time - start_time

            assert result['query_type'] == query_type

        # Performance assertions
        assert query_times['simple'] < 0.005  # < 5ms
        assert query_times['complex'] < 0.02  # < 20ms
        assert query_times['aggregate'] < 0.1  # < 100ms

    def test_batch_operation_performance(self):
        """Test batch operation performance"""
        # Mock batch insert
        def mock_batch_insert(items):
            time.sleep(len(items) * 0.001)  # 1ms per item
            return {'inserted_count': len(items)}

        # Test different batch sizes
        batch_sizes = [10, 50, 100, 500]
        performance_results = {}

        for batch_size in batch_sizes:
            items = [{'id': i, 'data': f'item_{i}'} for i in range(batch_size)]

            start_time = time.time()
            result = mock_batch_insert(items)
            end_time = time.time()

            performance_results[batch_size] = {
                'time': end_time - start_time,
                'throughput': batch_size / (end_time - start_time)
            }

            assert result['inserted_count'] == batch_size

        # Performance analysis
        for batch_size, metrics in performance_results.items():
            assert metrics['throughput'] > 100  # Should handle at least 100 items/second
            assert metrics['time'] < 5.0  # Should complete in under 5 seconds

    def test_connection_pool_performance(self):
        """Test database connection pool performance"""
        # Mock connection pool
        class MockConnectionPool:
            def __init__(self, max_connections=10):
                self.max_connections = max_connections
                self.active_connections = 0
                self.lock = threading.Lock()

            def get_connection(self):
                with self.lock:
                    if self.active_connections < self.max_connections:
                        self.active_connections += 1
                        return MockConnection()
                    else:
                        raise Exception("Connection pool exhausted")

            def release_connection(self, conn):
                with self.lock:
                    self.active_connections -= 1

        class MockConnection:
            def execute(self, query):
                time.sleep(0.01)  # 10ms query time
                return {'result': 'success'}

        # Test connection pool under load
        pool = MockConnectionPool(max_connections=5)

        def worker_task():
            try:
                conn = pool.get_connection()
                result = conn.execute("SELECT * FROM test")
                pool.release_connection(conn)
                return result
            except Exception as e:
                return {'error': str(e)}

        # Test concurrent access
        with ThreadPoolExecutor(max_workers=10) as executor:
            futures = [executor.submit(worker_task) for _ in range(20)]
            results = [future.result() for future in futures]

        # Analyze results
        successful_results = [r for r in results if 'error' not in r]
        error_results = [r for r in results if 'error' in r]

        # Should have some successful and some error results (pool exhaustion)
        assert len(successful_results) > 0
        assert len(error_results) > 0
        assert len(successful_results) + len(error_results) == 20


class TestBlockchainPerformance:
    """Test blockchain operation performance"""

    def test_transaction_processing_speed(self):
        """Test transaction processing speed"""
        # Mock transaction processing
        def mock_process_transaction(tx):
            # Base 100ms plus ~0.05ms per byte of data; the original 1ms-per-byte
            # cost made the 10KB transaction below sleep ~10s and fail its own
            # < 1 second assertion
            processing_time = 0.1 + (len(tx['data']) * 0.00005)
            time.sleep(processing_time)
            return {
                'tx_hash': f'0x{hash(str(tx)) % 1000000:x}',
                'processing_time': processing_time
            }

        # Test transactions of different sizes
        transactions = [
            {'data': 'small', 'amount': 1.0},
            {'data': 'x' * 100, 'amount': 10.0},  # 100 bytes
            {'data': 'x' * 1000, 'amount': 100.0},  # 1KB
            {'data': 'x' * 10000, 'amount': 1000.0},  # 10KB
        ]

        processing_times = []

        for tx in transactions:
            start_time = time.time()
            result = mock_process_transaction(tx)
            end_time = time.time()

            processing_times.append(result['processing_time'])
            assert 'tx_hash' in result
            assert result['processing_time'] > 0

        # Performance assertions
        assert processing_times[0] < 0.2  # Small transaction < 200ms
        assert processing_times[-1] < 1.0  # Large transaction < 1 second

    def test_block_validation_performance(self):
        """Test block validation performance"""
        # Mock block validation
        def mock_validate_block(block):
            num_transactions = len(block['transactions'])
            validation_time = num_transactions * 0.01  # 10ms per transaction
            time.sleep(validation_time)
            return {
                'valid': True,
                'validation_time': validation_time,
                'transactions_validated': num_transactions
            }

        # Test blocks with different transaction counts
        blocks = [
            {'transactions': [f'tx_{i}' for i in range(10)]},  # 10 transactions
            {'transactions': [f'tx_{i}' for i in range(50)]},  # 50 transactions
            {'transactions': [f'tx_{i}' for i in range(100)]},  # 100 transactions
        ]

        validation_results = []

        for block in blocks:
            start_time = time.time()
            result = mock_validate_block(block)
            end_time = time.time()

            validation_results.append(result)
            assert result['valid'] is True
            assert result['transactions_validated'] == len(block['transactions'])

        # Performance analysis
        for i, result in enumerate(validation_results):
            expected_time = len(blocks[i]['transactions']) * 0.01
            assert abs(result['validation_time'] - expected_time) < 0.01

    def test_sync_performance(self):
        """Test blockchain sync performance"""
        # Mock blockchain sync
        def mock_sync_blocks(start_block, end_block):
            num_blocks = end_block - start_block
            sync_time = num_blocks * 0.05  # 50ms per block
            time.sleep(sync_time)
            return {
                'synced_blocks': num_blocks,
                'sync_time': sync_time,
                'blocks_per_second': num_blocks / sync_time
            }

        # Test different sync ranges
        sync_ranges = [
            (1000, 1010),  # 10 blocks
            (1000, 1050),  # 50 blocks
            (1000, 1100),  # 100 blocks
        ]

        sync_results = []

        for start, end in sync_ranges:
            result = mock_sync_blocks(start, end)
            sync_results.append(result)

            assert result['synced_blocks'] == (end - start)
            assert result['blocks_per_second'] > 10  # Should sync at least 10 blocks/second

        # Performance consistency
        sync_rates = [result['blocks_per_second'] for result in sync_results]
        avg_sync_rate = statistics.mean(sync_rates)
        assert avg_sync_rate > 15  # Average should be at least 15 blocks/second


class TestSystemResourcePerformance:
|
||||
"""Test system resource utilization"""
|
||||
|
||||
def test_cpu_utilization(self):
|
||||
"""Test CPU utilization under load"""
|
||||
import psutil
|
||||
import os
|
||||
|
||||
# Get initial CPU usage
|
||||
initial_cpu = psutil.cpu_percent(interval=0.1)
|
||||
|
||||
# CPU-intensive task
|
||||
def cpu_intensive_task():
|
||||
result = 0
|
||||
for i in range(1000000):
|
||||
result += i * i
|
||||
return result
|
||||
|
||||
# Run CPU-intensive task
|
||||
start_time = time.time()
|
||||
cpu_intensive_task()
|
||||
end_time = time.time()
|
||||
|
||||
# Get CPU usage during task
|
||||
cpu_usage = psutil.cpu_percent(interval=0.1)
|
||||
|
||||
# Performance assertions
|
||||
execution_time = end_time - start_time
|
||||
assert execution_time < 5.0 # Should complete in under 5 seconds
|
||||
assert cpu_usage > 0 # Should show CPU usage
|
||||
|
||||
def test_disk_io_performance(self):
|
||||
"""Test disk I/O performance"""
|
||||
import tempfile
|
||||
from pathlib import Path
|
||||
|
||||
with tempfile.TemporaryDirectory() as temp_dir:
|
||||
temp_path = Path(temp_dir)
|
||||
|
||||
# Test write performance
|
||||
test_data = 'x' * (1024 * 1024) # 1MB of data
|
||||
write_times = []
|
||||
|
||||
for i in range(10):
|
||||
file_path = temp_path / f"test_file_{i}.txt"
|
||||
start_time = time.time()
|
||||
|
||||
with open(file_path, 'w') as f:
|
||||
f.write(test_data)
|
||||
|
||||
end_time = time.time()
|
||||
write_times.append(end_time - start_time)
|
||||
|
||||
# Test read performance
|
||||
read_times = []
|
||||
|
||||
for i in range(10):
|
||||
file_path = temp_path / f"test_file_{i}.txt"
|
||||
start_time = time.time()
|
||||
|
||||
with open(file_path, 'r') as f:
|
||||
data = f.read()
|
||||
|
||||
end_time = time.time()
|
||||
read_times.append(end_time - start_time)
|
||||
assert len(data) == len(test_data)
|
||||
|
||||
# Performance analysis
|
||||
avg_write_time = statistics.mean(write_times)
|
||||
avg_read_time = statistics.mean(read_times)
|
||||
|
||||
assert avg_write_time < 0.1 # Write should be under 100ms per MB
|
||||
assert avg_read_time < 0.05 # Read should be under 50ms per MB
|
||||
|
||||
    def test_network_performance(self):
        """Test network I/O performance"""
        # Mock network operations
        def mock_network_request(size_kb):
            # Simulate network latency and bandwidth
            latency = 0.01  # 10ms latency
            bandwidth_time = size_kb / 1000  # 1MB/s bandwidth
            total_time = latency + bandwidth_time
            time.sleep(total_time)
            return {'size': size_kb, 'time': total_time}

        # Test different request sizes
        request_sizes = [10, 100, 1000]  # KB
        network_results = []

        for size in request_sizes:
            result = mock_network_request(size)
            network_results.append(result)

            assert result['size'] == size
            assert result['time'] > 0

        # Performance analysis
        throughputs = [size / result['time'] for size, result in zip(request_sizes, network_results)]
        avg_throughput = statistics.mean(throughputs)

        assert avg_throughput > 500  # Should achieve at least 500 KB/s

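The mock above models a request as fixed latency plus size-proportional transfer time. As a quick sanity check of that model (a standalone sketch, not part of the test suite), the effective throughput can be computed in closed form without sleeping:

```python
def model_throughput(size_kb, latency_s=0.01, bandwidth_kb_s=1000):
    """Effective throughput (KB/s) for the latency + bandwidth model."""
    total_time = latency_s + size_kb / bandwidth_kb_s
    return size_kb / total_time

# Small requests are latency-bound; large ones approach the bandwidth limit
for size in (10, 100, 1000):
    print(f"{size} KB -> {model_throughput(size):.0f} KB/s")
```

A 10 KB request spends as long in latency as in transfer, so it reaches only half the nominal bandwidth, which is why the test's 500 KB/s floor is the tightest case.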
class TestScalabilityMetrics:
    """Test system scalability metrics"""

    def test_load_scaling(self):
        """Test system behavior under increasing load"""
        # Mock system under different loads
        def mock_system_load(load_factor):
            # Simulate increasing response times with load
            base_response_time = 0.1
            load_response_time = base_response_time * (1 + load_factor * 0.1)
            time.sleep(load_response_time)
            return {
                'load_factor': load_factor,
                'response_time': load_response_time,
                'throughput': 1 / load_response_time
            }

        # Test different load factors
        load_factors = [1, 2, 5, 10]  # 1x, 2x, 5x, 10x load
        scaling_results = []

        for load in load_factors:
            result = mock_system_load(load)
            scaling_results.append(result)

            assert result['load_factor'] == load
            assert result['response_time'] > 0
            assert result['throughput'] > 0

        # Scalability analysis
        response_times = [r['response_time'] for r in scaling_results]
        throughputs = [r['throughput'] for r in scaling_results]

        # Check that response times increase reasonably
        assert response_times[-1] < response_times[0] * 5  # Should not be 5x slower at 10x load

        # Check that throughput degrades gracefully
        assert throughputs[-1] > throughputs[0] / 5  # Should maintain at least 20% of peak throughput

    def test_resource_efficiency(self):
        """Test resource efficiency metrics"""
        # Mock resource usage
        def mock_resource_usage(requests_per_second):
            # Simulate resource usage scaling
            cpu_usage = min(90, requests_per_second * 2)  # 2% CPU per request/sec
            memory_usage = min(80, 50 + requests_per_second * 0.5)  # Base 50% + 0.5% per request/sec
            return {
                'requests_per_second': requests_per_second,
                'cpu_usage': cpu_usage,
                'memory_usage': memory_usage,
                'efficiency': requests_per_second / max(cpu_usage, memory_usage)
            }

        # Test different request rates
        request_rates = [10, 25, 50, 100]  # requests per second
        efficiency_results = []

        for rate in request_rates:
            result = mock_resource_usage(rate)
            efficiency_results.append(result)

            assert result['requests_per_second'] == rate
            assert result['cpu_usage'] <= 100
            assert result['memory_usage'] <= 100

        # Efficiency analysis
        efficiencies = [r['efficiency'] for r in efficiency_results]
        max_efficiency = max(efficiencies)

        assert max_efficiency > 1.0  # Should achieve reasonable efficiency
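The mocked resource model caps CPU at 90% and memory at 80%, so efficiency (requests per point of the scarcer resource) keeps rising once CPU saturates. Evaluated in closed form with the same constants as the test (a sketch, not part of the suite):

```python
def mock_efficiency(requests_per_second):
    """Efficiency for the mocked resource model (same constants as the test)."""
    cpu_usage = min(90, requests_per_second * 2)
    memory_usage = min(80, 50 + requests_per_second * 0.5)
    return requests_per_second / max(cpu_usage, memory_usage)

rates = [10, 25, 50, 100]
effs = [mock_efficiency(r) for r in rates]
# CPU saturates at 90% once the rate passes 45 req/s, so efficiency
# grows monotonically over the tested rates and peaks at 100 req/s
assert max(effs) == mock_efficiency(100) > 1.0
```

This is why the test's `max_efficiency > 1.0` assertion holds: at 100 req/s the model yields 100/90 ≈ 1.11.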
505
tests/testing/test_performance_lightweight.py
Executable file
@@ -0,0 +1,505 @@
"""
Performance Tests for AITBC Chain Management and Analytics
Tests system performance under various load conditions (lightweight version)
"""

import pytest
import asyncio
import json
import time
import threading
import statistics
from datetime import datetime, timedelta
from pathlib import Path
import subprocess
import requests
from concurrent.futures import ThreadPoolExecutor, as_completed
from typing import Dict, Any, List, Tuple
import os
import resource


class TestPerformance:
    """Performance testing suite for AITBC components"""

    @pytest.fixture(scope="class")
    def performance_config(self):
        """Performance test configuration"""
        return {
            "base_url": "http://localhost",
            "ports": {
                "coordinator": 8001,
                "blockchain": 8007,
                "consensus": 8002,
                "network": 8008,
                "explorer": 8016,
                "wallet_daemon": 8003,
                "exchange": 8010,
                "oracle": 8011,
                "trading": 8012,
                "compliance": 8015,
                "plugin_registry": 8013,
                "plugin_marketplace": 8014,
                "global_infrastructure": 8017,
                "ai_agents": 8018,
                "load_balancer": 8019
            },
            "performance_thresholds": {
                "response_time_p95": 2000,  # 95th percentile < 2 seconds
                "response_time_p99": 5000,  # 99th percentile < 5 seconds
                "error_rate": 0.01,  # < 1% error rate
                "throughput_min": 50,  # Minimum 50 requests/second
                "cli_response_max": 5000  # CLI max response time < 5 seconds
            }
        }
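For reference, the fixture's `base_url` and `ports` combine into per-service endpoints the same way the health checks later in this file build them (a minimal illustration only; `health_url` is a hypothetical helper, not part of the suite):

```python
config = {
    "base_url": "http://localhost",
    "ports": {"coordinator": 8001, "consensus": 8002},
}

def health_url(config, service):
    """Compose the health-check URL for a named service from the config."""
    return f"{config['base_url']}:{config['ports'][service]}/health"

print(health_url(config, "consensus"))  # http://localhost:8002/health
```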
    def get_memory_usage(self):
        """Get peak memory usage in MB (lightweight version)"""
        try:
            # resource.getrusage reports peak RSS; ru_maxrss is in KiB on
            # Linux (bytes on macOS), so this conversion assumes Linux.
            usage = resource.getrusage(resource.RUSAGE_SELF)
            return usage.ru_maxrss / 1024  # Convert KiB to MB (on Linux)
        except Exception:
            return 0

    def get_cpu_usage(self):
        """Get CPU usage (lightweight version)"""
        # Simplified placeholder: accurate CPU sampling would require reading
        # /proc/stat or a library such as psutil, so this returns 0 rather
        # than busy-waiting for a sample window.
        return 0

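`ru_maxrss` has platform-dependent units, kibibytes on Linux but bytes on macOS, so a portable variant of the helper above might normalise by platform (a sketch; the test file itself assumes Linux):

```python
import resource
import sys

def peak_rss_mb():
    """Peak resident set size in MB, normalising ru_maxrss units by platform."""
    rss = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    divisor = 1024 * 1024 if sys.platform == "darwin" else 1024  # bytes vs KiB
    return rss / divisor

print(f"peak RSS: {peak_rss_mb():.1f} MB")
```

Note that `ru_maxrss` is a high-water mark: it never decreases, so "memory increase" measured from it can only capture growth, not reclamation.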
    def test_cli_performance(self, performance_config):
        """Test CLI command performance"""
        cli_commands = [
            ["--help"],
            ["wallet", "--help"],
            ["blockchain", "--help"],
            ["multisig", "--help"],
            ["genesis-protection", "--help"],
            ["transfer-control", "--help"],
            ["compliance", "--help"],
            ["exchange", "--help"],
            ["oracle", "--help"],
            ["market-maker", "--help"]
        ]

        response_times = []
        memory_usage_before = self.get_memory_usage()

        for command in cli_commands:
            start_time = time.time()

            result = subprocess.run(
                ["python", "-m", "aitbc_cli.main"] + command,
                capture_output=True,
                text=True,
                cwd="/home/oib/windsurf/aitbc/cli"
            )

            end_time = time.time()
            response_time = (end_time - start_time) * 1000  # Convert to milliseconds

            assert result.returncode == 0, f"CLI command failed: {' '.join(command)}"
            assert response_time < performance_config["performance_thresholds"]["cli_response_max"], \
                f"CLI command too slow: {response_time:.2f}ms"

            response_times.append(response_time)

        memory_usage_after = self.get_memory_usage()
        memory_increase = memory_usage_after - memory_usage_before

        # Calculate performance statistics
        avg_response_time = statistics.mean(response_times)
        p95_response_time = statistics.quantiles(response_times, n=20)[18] if len(response_times) > 20 else max(response_times)
        max_response_time = max(response_times)

        # Performance assertions
        assert avg_response_time < 1000, f"Average CLI response time too high: {avg_response_time:.2f}ms"
        assert p95_response_time < 3000, f"95th percentile CLI response time too high: {p95_response_time:.2f}ms"
        assert max_response_time < 10000, f"Maximum CLI response time too high: {max_response_time:.2f}ms"
        assert memory_increase < 100, f"Memory usage increased too much: {memory_increase:.1f}MB"

        print("CLI Performance Results:")
        print(f"  Average: {avg_response_time:.2f}ms")
        print(f"  95th percentile: {p95_response_time:.2f}ms")
        print(f"  Maximum: {max_response_time:.2f}ms")
        print(f"  Memory increase: {memory_increase:.1f}MB")

    def test_concurrent_cli_operations(self, performance_config):
        """Test concurrent CLI operations"""
        def run_cli_command(command):
            start_time = time.time()
            result = subprocess.run(
                ["python", "-m", "aitbc_cli.main"] + command,
                capture_output=True,
                text=True,
                cwd="/home/oib/windsurf/aitbc/cli"
            )
            end_time = time.time()
            return {
                "command": command,
                "success": result.returncode == 0,
                "response_time": (end_time - start_time) * 1000,
                "output_length": len(result.stdout)
            }

        # Test concurrent operations
        commands_to_test = [
            ["wallet", "--help"],
            ["blockchain", "--help"],
            ["multisig", "--help"],
            ["compliance", "--help"],
            ["exchange", "--help"]
        ]

        with ThreadPoolExecutor(max_workers=10) as executor:
            # Submit 20 rounds of the 5 commands (100 operations total)
            futures = []
            for _ in range(20):
                for command in commands_to_test:
                    future = executor.submit(run_cli_command, command)
                    futures.append(future)

            # Collect results
            results = []
            for future in as_completed(futures):
                result = future.result()
                results.append(result)

        # Analyze results
        successful_operations = [r for r in results if r["success"]]
        response_times = [r["response_time"] for r in successful_operations]

        success_rate = len(successful_operations) / len(results)
        avg_response_time = statistics.mean(response_times) if response_times else 0
        p95_response_time = statistics.quantiles(response_times, n=20)[18] if len(response_times) > 20 else max(response_times) if response_times else 0

        # Performance assertions
        assert success_rate >= 0.95, f"Low success rate: {success_rate:.2%}"
        assert avg_response_time < 2000, f"Average response time too high: {avg_response_time:.2f}ms"
        assert p95_response_time < 5000, f"95th percentile response time too high: {p95_response_time:.2f}ms"

        print("Concurrent CLI Operations Results:")
        print(f"  Success rate: {success_rate:.2%}")
        print(f"  Average response time: {avg_response_time:.2f}ms")
        print(f"  95th percentile: {p95_response_time:.2f}ms")
        print(f"  Total operations: {len(results)}")

    def test_cli_memory_efficiency(self, performance_config):
        """Test CLI memory efficiency"""
        memory_samples = []

        def monitor_memory():
            for _ in range(10):
                memory_usage = self.get_memory_usage()
                memory_samples.append(memory_usage)
                time.sleep(0.5)

        def run_cli_operations():
            commands = [
                ["wallet", "--help"],
                ["blockchain", "--help"],
                ["multisig", "--help"],
                ["genesis-protection", "--help"],
                ["transfer-control", "--help"],
                ["compliance", "--help"],
                ["exchange", "--help"],
                ["oracle", "--help"],
                ["market-maker", "--help"]
            ]

            for _ in range(5):  # Run commands multiple times
                for command in commands:
                    subprocess.run(
                        ["python", "-m", "aitbc_cli.main"] + command,
                        capture_output=True,
                        text=True,
                        cwd="/home/oib/windsurf/aitbc/cli"
                    )

        # Monitor memory during operations
        monitor_thread = threading.Thread(target=monitor_memory)
        operation_thread = threading.Thread(target=run_cli_operations)

        monitor_thread.start()
        operation_thread.start()

        monitor_thread.join()
        operation_thread.join()

        # Analyze memory usage
        if memory_samples:
            avg_memory = statistics.mean(memory_samples)
            max_memory = max(memory_samples)
            memory_variance = statistics.variance(memory_samples) if len(memory_samples) > 1 else 0

            # Memory efficiency assertions
            assert max_memory - min(memory_samples) < 50, f"Memory usage variance too high: {max_memory - min(memory_samples):.1f}MB"
            assert avg_memory < 200, f"Average memory usage too high: {avg_memory:.1f}MB"

            print("CLI Memory Efficiency Results:")
            print(f"  Average memory: {avg_memory:.1f}MB")
            print(f"  Maximum memory: {max_memory:.1f}MB")
            print(f"  Memory variance: {memory_variance:.1f}")

    def test_cli_throughput(self, performance_config):
        """Test CLI command throughput"""
        def measure_throughput():
            commands = [
                ["wallet", "--help"],
                ["blockchain", "--help"],
                ["multisig", "--help"]
            ]

            start_time = time.time()
            successful_operations = 0

            for i in range(100):  # 100 operations
                command = commands[i % len(commands)]
                result = subprocess.run(
                    ["python", "-m", "aitbc_cli.main"] + command,
                    capture_output=True,
                    text=True,
                    cwd="/home/oib/windsurf/aitbc/cli"
                )

                if result.returncode == 0:
                    successful_operations += 1

            end_time = time.time()
            duration = end_time - start_time
            throughput = successful_operations / duration  # operations per second

            return {
                "total_operations": 100,
                "successful_operations": successful_operations,
                "duration": duration,
                "throughput": throughput
            }

        # Run throughput test
        result = measure_throughput()

        # Throughput assertions
        assert result["successful_operations"] >= 95, f"Too many failed operations: {result['successful_operations']}/100"
        assert result["throughput"] >= 10, f"Throughput too low: {result['throughput']:.2f} ops/s"
        assert result["duration"] < 30, f"Test took too long: {result['duration']:.2f}s"

        print("CLI Throughput Results:")
        print(f"  Successful operations: {result['successful_operations']}/100")
        print(f"  Duration: {result['duration']:.2f}s")
        print(f"  Throughput: {result['throughput']:.2f} ops/s")

    def test_cli_response_time_distribution(self, performance_config):
        """Test CLI response time distribution"""
        commands = [
            ["--help"],
            ["wallet", "--help"],
            ["blockchain", "--help"],
            ["multisig", "--help"],
            ["genesis-protection", "--help"],
            ["transfer-control", "--help"],
            ["compliance", "--help"],
            ["exchange", "--help"],
            ["oracle", "--help"],
            ["market-maker", "--help"]
        ]

        response_times = []

        # Run each command multiple times
        for command in commands:
            for _ in range(10):  # 10 times per command
                start_time = time.time()

                result = subprocess.run(
                    ["python", "-m", "aitbc_cli.main"] + command,
                    capture_output=True,
                    text=True,
                    cwd="/home/oib/windsurf/aitbc/cli"
                )

                end_time = time.time()
                response_time = (end_time - start_time) * 1000

                assert result.returncode == 0, f"CLI command failed: {' '.join(command)}"
                response_times.append(response_time)

        # Calculate distribution statistics
        min_time = min(response_times)
        max_time = max(response_times)
        mean_time = statistics.mean(response_times)
        median_time = statistics.median(response_times)
        std_dev = statistics.stdev(response_times)

        # Percentiles
        sorted_times = sorted(response_times)
        p50 = sorted_times[len(sorted_times) // 2]
        p90 = sorted_times[int(len(sorted_times) * 0.9)]
        p95 = sorted_times[int(len(sorted_times) * 0.95)]
        p99 = sorted_times[int(len(sorted_times) * 0.99)]

        # Distribution assertions
        assert mean_time < 1000, f"Mean response time too high: {mean_time:.2f}ms"
        assert p95 < 3000, f"95th percentile too high: {p95:.2f}ms"
        assert p99 < 5000, f"99th percentile too high: {p99:.2f}ms"
        assert std_dev < mean_time, f"Standard deviation too high: {std_dev:.2f}ms"

        print("CLI Response Time Distribution:")
        print(f"  Min: {min_time:.2f}ms")
        print(f"  Max: {max_time:.2f}ms")
        print(f"  Mean: {mean_time:.2f}ms")
        print(f"  Median: {median_time:.2f}ms")
        print(f"  Std Dev: {std_dev:.2f}ms")
        print(f"  50th percentile: {p50:.2f}ms")
        print(f"  90th percentile: {p90:.2f}ms")
        print(f"  95th percentile: {p95:.2f}ms")
        print(f"  99th percentile: {p99:.2f}ms")

    def test_cli_scalability(self, performance_config):
        """Test CLI scalability with increasing load"""
        def test_load_level(num_concurrent, operations_per_thread):
            def worker():
                commands = [["--help"], ["wallet", "--help"], ["blockchain", "--help"]]
                results = []

                for i in range(operations_per_thread):
                    command = commands[i % len(commands)]
                    start_time = time.time()

                    result = subprocess.run(
                        ["python", "-m", "aitbc_cli.main"] + command,
                        capture_output=True,
                        text=True,
                        cwd="/home/oib/windsurf/aitbc/cli"
                    )

                    end_time = time.time()
                    results.append({
                        "success": result.returncode == 0,
                        "response_time": (end_time - start_time) * 1000
                    })

                return results

            with ThreadPoolExecutor(max_workers=num_concurrent) as executor:
                futures = [executor.submit(worker) for _ in range(num_concurrent)]
                all_results = []

                for future in as_completed(futures):
                    worker_results = future.result()
                    all_results.extend(worker_results)

            # Analyze results; always return a dict so the caller never
            # receives None, even if every operation at this level failed
            successful = [r for r in all_results if r["success"]]
            response_times = [r["response_time"] for r in successful]

            success_rate = len(successful) / len(all_results) if all_results else 0
            avg_response_time = statistics.mean(response_times) if response_times else 0

            return {
                "total_operations": len(all_results),
                "successful_operations": len(successful),
                "success_rate": success_rate,
                "avg_response_time": avg_response_time
            }

        # Test different load levels
        load_levels = [
            (1, 50),   # 1 thread, 50 operations
            (2, 50),   # 2 threads, 50 operations each
            (5, 20),   # 5 threads, 20 operations each
            (10, 10)   # 10 threads, 10 operations each
        ]

        results = {}

        for num_threads, ops_per_thread in load_levels:
            result = test_load_level(num_threads, ops_per_thread)
            results[f"{num_threads}x{ops_per_thread}"] = result

            # Scalability assertions
            assert result["success_rate"] >= 0.90, f"Low success rate at {num_threads}x{ops_per_thread}: {result['success_rate']:.2%}"
            assert result["avg_response_time"] < 3000, f"Response time too high at {num_threads}x{ops_per_thread}: {result['avg_response_time']:.2f}ms"

        print("CLI Scalability Results:")
        for load_level, result in results.items():
            print(f"  {load_level}: {result['success_rate']:.2%} success, {result['avg_response_time']:.2f}ms avg")

    def test_cli_error_handling_performance(self, performance_config):
        """Test CLI error handling performance"""
        # Test invalid commands
        invalid_commands = [
            ["--invalid-option"],
            ["wallet", "--invalid-subcommand"],
            ["blockchain", "invalid-subcommand"],
            ["nonexistent-command"]
        ]

        response_times = []

        for command in invalid_commands:
            start_time = time.time()

            result = subprocess.run(
                ["python", "-m", "aitbc_cli.main"] + command,
                capture_output=True,
                text=True,
                cwd="/home/oib/windsurf/aitbc/cli"
            )

            end_time = time.time()
            response_time = (end_time - start_time) * 1000

            # Should fail gracefully
            assert result.returncode != 0, f"Invalid command should fail: {' '.join(command)}"
            assert response_time < 2000, f"Error handling too slow: {response_time:.2f}ms"

            response_times.append(response_time)

        avg_error_response_time = statistics.mean(response_times)
        max_error_response_time = max(response_times)

        # Error handling performance assertions
        assert avg_error_response_time < 1000, f"Average error response time too high: {avg_error_response_time:.2f}ms"
        assert max_error_response_time < 2000, f"Maximum error response time too high: {max_error_response_time:.2f}ms"

        print("CLI Error Handling Performance:")
        print(f"  Average error response time: {avg_error_response_time:.2f}ms")
        print(f"  Maximum error response time: {max_error_response_time:.2f}ms")


class TestServicePerformance:
    """Test service performance (when services are available)"""

    def test_service_health_performance(self, performance_config):
        """Test service health endpoint performance"""
        services_to_test = {
            "global_infrastructure": performance_config["ports"]["global_infrastructure"],
            "consensus": performance_config["ports"]["consensus"]
        }

        for service_name, port in services_to_test.items():
            try:
                start_time = time.time()
                response = requests.get(f"{performance_config['base_url']}:{port}/health", timeout=5)
                end_time = time.time()

                response_time = (end_time - start_time) * 1000

                if response.status_code == 200:
                    assert response_time < 1000, f"{service_name} health endpoint too slow: {response_time:.2f}ms"
                    print(f"✅ {service_name} health: {response_time:.2f}ms")
                else:
                    print(f"⚠️ {service_name} health returned {response.status_code}")

            except Exception as e:
                print(f"❌ {service_name} health check failed: {str(e)}")


if __name__ == "__main__":
    # Run performance tests
    pytest.main([__file__, "-v", "--tb=short"])
693
tests/testing/test_pricing_performance.py
Executable file
@@ -0,0 +1,693 @@
"""
Performance Tests for Dynamic Pricing System
Tests system performance under load and stress conditions
"""

import pytest
import asyncio
import time
import psutil
import threading
from datetime import datetime, timedelta
from concurrent.futures import ThreadPoolExecutor
from unittest.mock import Mock, patch
import statistics

from app.services.dynamic_pricing_engine import DynamicPricingEngine, PricingStrategy, ResourceType
from app.services.market_data_collector import MarketDataCollector


class TestPricingPerformance:
    """Performance tests for the dynamic pricing system"""

    @pytest.fixture
    def pricing_engine(self):
        """Create pricing engine optimized for performance testing"""
        config = {
            "min_price": 0.001,
            "max_price": 1000.0,
            "update_interval": 60,
            "forecast_horizon": 24,
            "max_volatility_threshold": 0.3,
            "circuit_breaker_threshold": 0.5
        }
        engine = DynamicPricingEngine(config)
        return engine

    @pytest.fixture
    def market_collector(self):
        """Create market data collector for performance testing"""
        config = {
            "websocket_port": 8767
        }
        collector = MarketDataCollector(config)
        return collector

    @pytest.mark.asyncio
    async def test_single_pricing_calculation_performance(self, pricing_engine):
        """Test performance of individual pricing calculations"""
        await pricing_engine.initialize()

        # Measure single calculation time
        start_time = time.time()

        result = await pricing_engine.calculate_dynamic_price(
            resource_id="perf_test_gpu",
            resource_type=ResourceType.GPU,
            base_price=0.05,
            strategy=PricingStrategy.MARKET_BALANCE
        )

        end_time = time.time()
        calculation_time = end_time - start_time

        # Performance assertions
        assert calculation_time < 0.1  # Should complete within 100ms
        assert result.recommended_price > 0
        assert result.confidence_score > 0

        print(f"Single calculation time: {calculation_time:.4f}s")

    @pytest.mark.asyncio
    async def test_concurrent_pricing_calculations(self, pricing_engine):
        """Test performance of concurrent pricing calculations"""
        await pricing_engine.initialize()

        num_concurrent = 100
        num_iterations = 10

        all_times = []

        for iteration in range(num_iterations):
            # Create concurrent tasks
            tasks = []
            start_time = time.time()

            for i in range(num_concurrent):
                task = pricing_engine.calculate_dynamic_price(
                    resource_id=f"concurrent_perf_gpu_{iteration}_{i}",
                    resource_type=ResourceType.GPU,
                    base_price=0.05,
                    strategy=PricingStrategy.MARKET_BALANCE
                )
                tasks.append(task)

            # Execute all tasks concurrently
            results = await asyncio.gather(*tasks)

            end_time = time.time()
            iteration_time = end_time - start_time
            all_times.append(iteration_time)

            # Verify all calculations completed successfully
            assert len(results) == num_concurrent
            for result in results:
                assert result.recommended_price > 0
                assert result.confidence_score > 0

            print(f"Iteration {iteration + 1}: {num_concurrent} calculations in {iteration_time:.4f}s")

        # Performance analysis
        avg_time = statistics.mean(all_times)
        min_time = min(all_times)
        max_time = max(all_times)
        std_dev = statistics.stdev(all_times)

        print("Concurrent performance stats:")
        print(f"  Average time: {avg_time:.4f}s")
        print(f"  Min time: {min_time:.4f}s")
        print(f"  Max time: {max_time:.4f}s")
        print(f"  Std deviation: {std_dev:.4f}s")

        # Performance assertions
        assert avg_time < 2.0  # Should complete 100 calculations within 2 seconds
        assert std_dev < 0.5  # Low variance in performance

    @pytest.mark.asyncio
    async def test_high_volume_pricing_calculations(self, pricing_engine):
        """Test performance under high volume load"""
        await pricing_engine.initialize()

        num_calculations = 1000
        batch_size = 50

        start_time = time.time()

        # Process in batches to avoid overwhelming the system
        for batch_start in range(0, num_calculations, batch_size):
            batch_end = min(batch_start + batch_size, num_calculations)

            tasks = []
            for i in range(batch_start, batch_end):
                task = pricing_engine.calculate_dynamic_price(
                    resource_id=f"high_volume_gpu_{i}",
                    resource_type=ResourceType.GPU,
                    base_price=0.05,
                    strategy=PricingStrategy.MARKET_BALANCE
                )
                tasks.append(task)

            await asyncio.gather(*tasks)

        end_time = time.time()
        total_time = end_time - start_time
        calculations_per_second = num_calculations / total_time

        print("High volume test:")
        print(f"  {num_calculations} calculations in {total_time:.2f}s")
        print(f"  {calculations_per_second:.2f} calculations/second")

        # Performance assertions
        assert calculations_per_second > 50  # Should handle at least 50 calculations per second
        assert total_time < 30  # Should complete within 30 seconds

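The batch loop above generalises to a small helper. In this sketch `fake_price` is a hypothetical stand-in for `calculate_dynamic_price`, used only to keep the example self-contained and fast:

```python
import asyncio

async def gather_in_batches(coro_factory, total, batch_size):
    """Run `total` coroutines from coro_factory(i), batch_size at a time."""
    results = []
    for start in range(0, total, batch_size):
        batch = [coro_factory(i) for i in range(start, min(start + batch_size, total))]
        results.extend(await asyncio.gather(*batch))
    return results

async def fake_price(i):
    await asyncio.sleep(0)          # stand-in for an async pricing calculation
    return 0.05 + i * 0.0001

prices = asyncio.run(gather_in_batches(fake_price, 100, 25))
assert len(prices) == 100 and prices[0] == 0.05
```

Batching bounds the number of in-flight coroutines, which is the same back-pressure the test applies to avoid overwhelming the engine; `asyncio.gather` also preserves input order within each batch, so `prices` comes back in index order.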
    @pytest.mark.asyncio
    async def test_forecast_generation_performance(self, pricing_engine):
        """Test performance of price forecast generation"""
        await pricing_engine.initialize()

        # Add historical data for forecasting
        base_time = datetime.utcnow()
        for i in range(100):  # 100 data points
            pricing_engine.pricing_history.setdefault("forecast_perf_gpu", []).append(
                Mock(
                    price=0.05 + (i * 0.0001),
                    demand_level=0.6 + (i % 10) * 0.02,
                    supply_level=0.7 - (i % 8) * 0.01,
                    confidence=0.8,
                    strategy_used="market_balance",
                    timestamp=base_time - timedelta(hours=100 - i)
                )
            )

        # Test forecast generation performance
        forecast_horizons = [24, 48, 72]
        forecast_times = []

        for horizon in forecast_horizons:
            start_time = time.time()

            forecast = await pricing_engine.get_price_forecast("forecast_perf_gpu", horizon)

            end_time = time.time()
            forecast_time = end_time - start_time
            forecast_times.append(forecast_time)

            assert len(forecast) == horizon
            print(f"Forecast {horizon}h: {forecast_time:.4f}s ({len(forecast)} points)")

        # Performance assertions
        avg_forecast_time = statistics.mean(forecast_times)
        assert avg_forecast_time < 0.5  # Forecasts should complete within 500ms

    @pytest.mark.asyncio
    async def test_memory_usage_under_load(self, pricing_engine):
        """Test memory usage during high load"""
        await pricing_engine.initialize()

        # Measure initial memory usage
        process = psutil.Process()
        initial_memory = process.memory_info().rss / 1024 / 1024  # MB

        # Generate high load
        num_calculations = 500

        for i in range(num_calculations):
            await pricing_engine.calculate_dynamic_price(
                resource_id=f"memory_test_gpu_{i}",
                resource_type=ResourceType.GPU,
                base_price=0.05,
                strategy=PricingStrategy.MARKET_BALANCE
            )

        # Measure memory usage after load
        final_memory = process.memory_info().rss / 1024 / 1024  # MB
        memory_increase = final_memory - initial_memory

        print("Memory usage test:")
        print(f"  Initial memory: {initial_memory:.2f} MB")
        print(f"  Final memory: {final_memory:.2f} MB")
        print(f"  Memory increase: {memory_increase:.2f} MB")
        print(f"  Memory per calculation: {memory_increase / num_calculations:.4f} MB")

        # Memory assertions
        assert memory_increase < 100  # Should not increase by more than 100MB
        assert memory_increase / num_calculations < 0.5  # Less than 0.5MB per calculation

    @pytest.mark.asyncio
    async def test_market_data_collection_performance(self, market_collector):
        """Test performance of market data collection"""
        await market_collector.initialize()

        # Measure data collection performance
        collection_times = {}

        for source in market_collector.collection_intervals.keys():
            start_time = time.time()

            await market_collector._collect_from_source(source)

            end_time = time.time()
            collection_time = end_time - start_time
            collection_times[source.value] = collection_time

            print(f"Data collection {source.value}: {collection_time:.4f}s")

        # Performance assertions
        for source, collection_time in collection_times.items():
            assert collection_time < 1.0  # Each collection should complete within 1 second

        total_collection_time = sum(collection_times.values())
        assert total_collection_time < 5.0  # All collections should complete within 5 seconds

    @pytest.mark.asyncio
    async def test_strategy_switching_performance(self, pricing_engine):
        """Test performance of strategy switching"""
        await pricing_engine.initialize()

        strategies = [
            PricingStrategy.AGGRESSIVE_GROWTH,
            PricingStrategy.PROFIT_MAXIMIZATION,
            PricingStrategy.MARKET_BALANCE,
            PricingStrategy.COMPETITIVE_RESPONSE,
            PricingStrategy.DEMAND_ELASTICITY
        ]

        switch_times = []

        for strategy in strategies:
            start_time = time.time()

            await pricing_engine.set_provider_strategy(
                provider_id="switch_test_provider",
                strategy=strategy
            )

            # Calculate price with new strategy
            await pricing_engine.calculate_dynamic_price(
                resource_id="switch_test_gpu",
                resource_type=ResourceType.GPU,
                base_price=0.05,
                strategy=strategy
            )

            end_time = time.time()
            switch_time = end_time - start_time
            switch_times.append(switch_time)

            print(f"Strategy switch to {strategy.value}: {switch_time:.4f}s")

        # Performance assertions
        avg_switch_time = statistics.mean(switch_times)
        assert avg_switch_time < 0.05  # Strategy switches should be very fast

@pytest.mark.asyncio
|
||||
async def test_circuit_breaker_performance(self, pricing_engine):
|
||||
"""Test circuit breaker performance under stress"""
|
||||
|
||||
await pricing_engine.initialize()
|
||||
|
||||
# Add pricing history
|
||||
base_time = datetime.utcnow()
|
||||
for i in range(10):
|
||||
pricing_engine.pricing_history["circuit_perf_gpu"] = pricing_engine.pricing_history.get("circuit_perf_gpu", [])
|
||||
pricing_engine.pricing_history["circuit_perf_gpu"].append(
|
||||
Mock(
|
||||
price=0.05,
|
||||
timestamp=base_time - timedelta(minutes=10-i),
|
||||
demand_level=0.5,
|
||||
supply_level=0.5,
|
||||
confidence=0.8,
|
||||
strategy_used="market_balance"
|
||||
)
|
||||
)
|
||||
|
||||
# Test circuit breaker activation performance
|
||||
start_time = time.time()
|
||||
|
||||
# Simulate high volatility conditions
|
||||
with patch.object(pricing_engine, '_get_market_conditions') as mock_conditions:
|
||||
mock_conditions.return_value = Mock(
|
||||
demand_level=0.9,
|
||||
supply_level=0.3,
|
||||
price_volatility=0.8, # High volatility
|
||||
utilization_rate=0.95
|
||||
)
|
||||
|
||||
result = await pricing_engine.calculate_dynamic_price(
|
||||
resource_id="circuit_perf_gpu",
|
||||
resource_type=ResourceType.GPU,
|
||||
base_price=0.05,
|
||||
strategy=PricingStrategy.MARKET_BALANCE
|
||||
)
|
||||
|
||||
end_time = time.time()
|
||||
circuit_time = end_time - start_time
|
||||
|
||||
print(f"Circuit breaker activation: {circuit_time:.4f}s")
|
||||
|
||||
# Verify circuit breaker was activated
|
||||
assert "circuit_perf_gpu" in pricing_engine.circuit_breakers
|
||||
assert pricing_engine.circuit_breakers["circuit_perf_gpu"] is True
|
||||
|
||||
# Performance assertions
|
||||
assert circuit_time < 0.1 # Circuit breaker should be very fast
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_price_history_scaling(self, pricing_engine):
|
||||
"""Test performance with large price history"""
|
||||
|
||||
await pricing_engine.initialize()
|
||||
|
||||
# Build large price history
|
||||
num_history_points = 10000
|
||||
resource_id = "scaling_test_gpu"
|
||||
|
||||
print(f"Building {num_history_points} history points...")
|
||||
build_start = time.time()
|
||||
|
||||
base_time = datetime.utcnow()
|
||||
for i in range(num_history_points):
|
||||
pricing_engine.pricing_history[resource_id] = pricing_engine.pricing_history.get(resource_id, [])
|
||||
pricing_engine.pricing_history[resource_id].append(
|
||||
Mock(
|
||||
price=0.05 + (i * 0.00001),
|
||||
demand_level=0.6 + (i % 10) * 0.02,
|
||||
supply_level=0.7 - (i % 8) * 0.01,
|
||||
confidence=0.8,
|
||||
strategy_used="market_balance",
|
||||
timestamp=base_time - timedelta(minutes=num_history_points-i)
|
||||
)
|
||||
)
|
||||
|
||||
build_end = time.time()
|
||||
build_time = build_end - build_start
|
||||
|
||||
print(f"History build time: {build_time:.4f}s")
|
||||
print(f"History size: {len(pricing_engine.pricing_history[resource_id])} points")
|
||||
|
||||
# Test calculation performance with large history
|
||||
calc_start = time.time()
|
||||
|
||||
result = await pricing_engine.calculate_dynamic_price(
|
||||
resource_id=resource_id,
|
||||
resource_type=ResourceType.GPU,
|
||||
base_price=0.05,
|
||||
strategy=PricingStrategy.MARKET_BALANCE
|
||||
)
|
||||
|
||||
calc_end = time.time()
|
||||
calc_time = calc_end - calc_start
|
||||
|
||||
print(f"Calculation with large history: {calc_time:.4f}s")
|
||||
|
||||
# Performance assertions
|
||||
assert build_time < 5.0 # History building should be fast
|
||||
assert calc_time < 0.5 # Calculation should still be fast even with large history
|
||||
assert len(pricing_engine.pricing_history[resource_id]) <= 1000 # Should enforce limit
|
||||
|
||||
def test_thread_safety(self, pricing_engine):
|
||||
"""Test thread safety of pricing calculations"""
|
||||
|
||||
# This test uses threading to simulate concurrent access
|
||||
def calculate_price_thread(thread_id, num_calculations, results):
|
||||
"""Thread function for pricing calculations"""
|
||||
loop = asyncio.new_event_loop()
|
||||
asyncio.set_event_loop(loop)
|
||||
|
||||
try:
|
||||
for i in range(num_calculations):
|
||||
result = loop.run_until_complete(
|
||||
pricing_engine.calculate_dynamic_price(
|
||||
resource_id=f"thread_test_gpu_{thread_id}_{i}",
|
||||
resource_type=ResourceType.GPU,
|
||||
base_price=0.05,
|
||||
strategy=PricingStrategy.MARKET_BALANCE
|
||||
)
|
||||
)
|
||||
results.append((thread_id, i, result.recommended_price))
|
||||
finally:
|
||||
loop.close()
|
||||
|
||||
# Run multiple threads
|
||||
num_threads = 5
|
||||
calculations_per_thread = 20
|
||||
results = []
|
||||
threads = []
|
||||
|
||||
start_time = time.time()
|
||||
|
||||
# Create and start threads
|
||||
for thread_id in range(num_threads):
|
||||
thread = threading.Thread(
|
||||
target=calculate_price_thread,
|
||||
args=(thread_id, calculations_per_thread, results)
|
||||
)
|
||||
threads.append(thread)
|
||||
thread.start()
|
||||
|
||||
# Wait for all threads to complete
|
||||
for thread in threads:
|
||||
thread.join()
|
||||
|
||||
end_time = time.time()
|
||||
total_time = end_time - start_time
|
||||
|
||||
print(f"Thread safety test:")
|
||||
print(f" {num_threads} threads, {calculations_per_thread} calculations each")
|
||||
print(f" Total time: {total_time:.4f}s")
|
||||
print(f" Results: {len(results)} calculations completed")
|
||||
|
||||
# Verify all calculations completed
|
||||
assert len(results) == num_threads * calculations_per_thread
|
||||
|
||||
# Verify no corruption in results
|
||||
for thread_id, calc_id, price in results:
|
||||
assert price > 0
|
||||
assert price < pricing_engine.max_price
|
||||
|
||||
|
||||
class TestLoadTesting:
|
||||
"""Load testing scenarios for the pricing system"""
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_sustained_load(self, pricing_engine):
|
||||
"""Test system performance under sustained load"""
|
||||
|
||||
await pricing_engine.initialize()
|
||||
|
||||
# Sustained load parameters
|
||||
duration_seconds = 30
|
||||
calculations_per_second = 50
|
||||
total_calculations = duration_seconds * calculations_per_second
|
||||
|
||||
results = []
|
||||
errors = []
|
||||
|
||||
async def sustained_load_worker():
|
||||
"""Worker for sustained load testing"""
|
||||
for i in range(total_calculations):
|
||||
try:
|
||||
start_time = time.time()
|
||||
|
||||
result = await pricing_engine.calculate_dynamic_price(
|
||||
resource_id=f"sustained_gpu_{i}",
|
||||
resource_type=ResourceType.GPU,
|
||||
base_price=0.05,
|
||||
strategy=PricingStrategy.MARKET_BALANCE
|
||||
)
|
||||
|
||||
end_time = time.time()
|
||||
calculation_time = end_time - start_time
|
||||
|
||||
results.append({
|
||||
"calculation_id": i,
|
||||
"time": calculation_time,
|
||||
"price": result.recommended_price,
|
||||
"confidence": result.confidence_score
|
||||
})
|
||||
|
||||
# Rate limiting
|
||||
await asyncio.sleep(1.0 / calculations_per_second)
|
||||
|
||||
except Exception as e:
|
||||
errors.append({"calculation_id": i, "error": str(e)})
|
||||
|
||||
# Run sustained load test
|
||||
start_time = time.time()
|
||||
await sustained_load_worker()
|
||||
end_time = time.time()
|
||||
|
||||
actual_duration = end_time - start_time
|
||||
|
||||
# Analyze results
|
||||
calculation_times = [r["time"] for r in results]
|
||||
avg_time = statistics.mean(calculation_times)
|
||||
p95_time = sorted(calculation_times)[int(len(calculation_times) * 0.95)]
|
||||
p99_time = sorted(calculation_times)[int(len(calculation_times) * 0.99)]
|
||||
|
||||
print(f"Sustained load test results:")
|
||||
print(f" Duration: {actual_duration:.2f}s (target: {duration_seconds}s)")
|
||||
print(f" Calculations: {len(results)} (target: {total_calculations})")
|
||||
print(f" Errors: {len(errors)}")
|
||||
print(f" Average time: {avg_time:.4f}s")
|
||||
print(f" 95th percentile: {p95_time:.4f}s")
|
||||
print(f" 99th percentile: {p99_time:.4f}s")
|
||||
|
||||
# Performance assertions
|
||||
assert len(errors) == 0 # No errors should occur
|
||||
assert len(results) >= total_calculations * 0.95 # At least 95% of calculations completed
|
||||
assert avg_time < 0.1 # Average calculation time under 100ms
|
||||
assert p95_time < 0.2 # 95th percentile under 200ms
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_burst_load(self, pricing_engine):
|
||||
"""Test system performance under burst load"""
|
||||
|
||||
await pricing_engine.initialize()
|
||||
|
||||
# Burst load parameters
|
||||
num_bursts = 5
|
||||
calculations_per_burst = 100
|
||||
burst_interval = 2 # seconds between bursts
|
||||
|
||||
burst_results = []
|
||||
|
||||
for burst_id in range(num_bursts):
|
||||
print(f"Starting burst {burst_id + 1}/{num_bursts}")
|
||||
|
||||
start_time = time.time()
|
||||
|
||||
# Create burst of calculations
|
||||
tasks = []
|
||||
for i in range(calculations_per_burst):
|
||||
task = pricing_engine.calculate_dynamic_price(
|
||||
resource_id=f"burst_gpu_{burst_id}_{i}",
|
||||
resource_type=ResourceType.GPU,
|
||||
base_price=0.05,
|
||||
strategy=PricingStrategy.MARKET_BALANCE
|
||||
)
|
||||
tasks.append(task)
|
||||
|
||||
# Execute burst
|
||||
results = await asyncio.gather(*tasks)
|
||||
|
||||
end_time = time.time()
|
||||
burst_time = end_time - start_time
|
||||
|
||||
burst_results.append({
|
||||
"burst_id": burst_id,
|
||||
"time": burst_time,
|
||||
"calculations": len(results),
|
||||
"throughput": len(results) / burst_time
|
||||
})
|
||||
|
||||
print(f" Burst {burst_id + 1}: {len(results)} calculations in {burst_time:.4f}s")
|
||||
print(f" Throughput: {len(results) / burst_time:.2f} calc/s")
|
||||
|
||||
# Wait between bursts
|
||||
if burst_id < num_bursts - 1:
|
||||
await asyncio.sleep(burst_interval)
|
||||
|
||||
# Analyze burst performance
|
||||
throughputs = [b["throughput"] for b in burst_results]
|
||||
avg_throughput = statistics.mean(throughputs)
|
||||
min_throughput = min(throughputs)
|
||||
max_throughput = max(throughputs)
|
||||
|
||||
print(f"Burst load test results:")
|
||||
print(f" Average throughput: {avg_throughput:.2f} calc/s")
|
||||
print(f" Min throughput: {min_throughput:.2f} calc/s")
|
||||
print(f" Max throughput: {max_throughput:.2f} calc/s")
|
||||
|
||||
# Performance assertions
|
||||
assert avg_throughput > 100 # Should handle at least 100 calculations per second
|
||||
assert min_throughput > 50 # Even slowest burst should be reasonable
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_stress_testing(self, pricing_engine):
|
||||
"""Stress test with extreme load conditions"""
|
||||
|
||||
await pricing_engine.initialize()
|
||||
|
||||
# Stress test parameters
|
||||
stress_duration = 60 # seconds
|
||||
max_concurrent = 200
|
||||
calculation_interval = 0.01 # very aggressive
|
||||
|
||||
results = []
|
||||
errors = []
|
||||
start_time = time.time()
|
||||
|
||||
async def stress_worker():
|
||||
"""Worker for stress testing"""
|
||||
calculation_id = 0
|
||||
|
||||
while time.time() - start_time < stress_duration:
|
||||
try:
|
||||
# Create batch of concurrent calculations
|
||||
batch_size = min(max_concurrent, 50)
|
||||
tasks = []
|
||||
|
||||
for i in range(batch_size):
|
||||
task = pricing_engine.calculate_dynamic_price(
|
||||
resource_id=f"stress_gpu_{calculation_id}_{i}",
|
||||
resource_type=ResourceType.GPU,
|
||||
base_price=0.05,
|
||||
strategy=PricingStrategy.MARKET_BALANCE
|
||||
)
|
||||
tasks.append(task)
|
||||
|
||||
# Execute batch
|
||||
batch_results = await asyncio.gather(*tasks, return_exceptions=True)
|
||||
|
||||
# Process results
|
||||
for result in batch_results:
|
||||
if isinstance(result, Exception):
|
||||
errors.append(str(result))
|
||||
else:
|
||||
results.append(result)
|
||||
|
||||
calculation_id += batch_size
|
||||
|
||||
# Very short interval
|
||||
await asyncio.sleep(calculation_interval)
|
||||
|
||||
except Exception as e:
|
||||
errors.append(str(e))
|
||||
break
|
||||
|
||||
# Run stress test
|
||||
await stress_worker()
|
||||
|
||||
end_time = time.time()
|
||||
actual_duration = end_time - start_time
|
||||
|
||||
# Analyze stress test results
|
||||
total_calculations = len(results)
|
||||
error_rate = len(errors) / (len(results) + len(errors)) if (len(results) + len(errors)) > 0 else 0
|
||||
throughput = total_calculations / actual_duration
|
||||
|
||||
print(f"Stress test results:")
|
||||
print(f" Duration: {actual_duration:.2f}s")
|
||||
print(f" Calculations: {total_calculations}")
|
||||
print(f" Errors: {len(errors)}")
|
||||
print(f" Error rate: {error_rate:.2%}")
|
||||
print(f" Throughput: {throughput:.2f} calc/s")
|
||||
|
||||
# Stress test assertions (more lenient than normal tests)
|
||||
assert error_rate < 0.05 # Error rate should be under 5%
|
||||
assert throughput > 20 # Should maintain reasonable throughput even under stress
|
||||
assert actual_duration >= stress_duration * 0.9 # Should run for most of the duration
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
pytest.main([__file__])
|
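The sustained-load test above derives p95/p99 by indexing into a sorted list; the standard library's `statistics.quantiles` gives an interpolated alternative. A minimal sketch with made-up latency values (not measurements from the tests):

```python
import statistics

# Hypothetical per-calculation latencies in seconds
calculation_times = [0.01, 0.02, 0.03, 0.04, 0.05, 0.06, 0.07, 0.08, 0.09, 0.10]

# Index-based p95, as the sustained-load test computes it
ordered = sorted(calculation_times)
p95_time = ordered[int(len(ordered) * 0.95)]

# Stdlib alternative: 99 cut points; element 94 is the interpolated 95th percentile
p95_interpolated = statistics.quantiles(calculation_times, n=100)[94]

print(p95_time)
```

The two values differ slightly on small samples, since the index method picks an observed value while `quantiles` interpolates between neighbors.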
74
tests/testing/test_simple_import.py
Executable file
@@ -0,0 +1,74 @@
#!/usr/bin/env python3
"""
Simple test for the block import endpoint without transactions
"""

import json
import hashlib
import requests

BASE_URL = "https://aitbc.bubuit.net/rpc"
CHAIN_ID = "ait-devnet"


def compute_block_hash(height, parent_hash, timestamp):
    """Compute the block hash using the same algorithm as the PoA proposer"""
    payload = f"{CHAIN_ID}|{height}|{parent_hash}|{timestamp}".encode()
    return "0x" + hashlib.sha256(payload).hexdigest()


def test_simple_block_import():
    """Test importing a simple block without transactions"""
    print("Testing Simple Block Import")
    print("=" * 40)

    # Get the current head
    response = requests.get(f"{BASE_URL}/head")
    head = response.json()
    print(f"Current head: height={head['height']}, hash={head['hash']}")

    # Create a new block
    height = head["height"] + 1
    parent_hash = head["hash"]
    timestamp = "2026-01-29T10:20:00"
    block_hash = compute_block_hash(height, parent_hash, timestamp)

    print("\nCreating test block:")
    print(f"  height: {height}")
    print(f"  parent_hash: {parent_hash}")
    print(f"  hash: {block_hash}")

    # Import the block
    response = requests.post(
        f"{BASE_URL}/blocks/import",
        json={
            "height": height,
            "hash": block_hash,
            "parent_hash": parent_hash,
            "proposer": "test-proposer",
            "timestamp": timestamp,
            "tx_count": 0
        }
    )

    print("\nImport response:")
    print(f"  Status: {response.status_code}")
    print(f"  Body: {response.json()}")

    if response.status_code == 200:
        print("\n✅ Block imported successfully!")

        # Verify the block was imported
        response = requests.get(f"{BASE_URL}/blocks/{height}")
        if response.status_code == 200:
            imported = response.json()
            print("\n✅ Verified imported block:")
            print(f"  height: {imported['height']}")
            print(f"  hash: {imported['hash']}")
            print(f"  proposer: {imported['proposer']}")
        else:
            print(f"\n❌ Could not retrieve imported block: {response.status_code}")
    else:
        print(f"\n❌ Import failed: {response.status_code}")


if __name__ == "__main__":
    test_simple_block_import()
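The hash scheme exercised above is a plain SHA-256 over the pipe-joined header fields, so block hashes can be checked offline without hitting the RPC endpoint. A small sketch (the genesis parent hash and timestamps below are made up for illustration):

```python
import hashlib

CHAIN_ID = "ait-devnet"


def compute_block_hash(height, parent_hash, timestamp):
    """Same scheme as the test: sha256 over chain_id|height|parent_hash|timestamp."""
    payload = f"{CHAIN_ID}|{height}|{parent_hash}|{timestamp}".encode()
    return "0x" + hashlib.sha256(payload).hexdigest()


# Chain two blocks: each header commits to its parent's hash
genesis_hash = compute_block_hash(0, "0x" + "0" * 64, "2026-01-29T00:00:00")
block_1_hash = compute_block_hash(1, genesis_hash, "2026-01-29T00:00:05")

# Deterministic: identical inputs always give the identical digest
assert genesis_hash == compute_block_hash(0, "0x" + "0" * 64, "2026-01-29T00:00:00")
# "0x" prefix plus 64 hex characters
assert len(block_1_hash) == 66 and block_1_hash.startswith("0x")
```

Because the digest covers the parent hash, changing any ancestor field changes every descendant hash, which is what makes the import endpoint's hash check meaningful.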
77
tests/testing/test_transactions_display.py
Executable file
@@ -0,0 +1,77 @@
#!/usr/bin/env python3
"""
Test whether transactions are displayed on the explorer
"""

import requests
from bs4 import BeautifulSoup


def main():
    print("🔍 Testing Transaction Display on Explorer")
    print("=" * 60)

    # Check that the API has transactions
    print("\n1. Checking API for transactions...")
    try:
        response = requests.get("https://aitbc.bubuit.net/api/explorer/transactions")
        if response.status_code == 200:
            data = response.json()
            print(f"✅ API has {len(data['items'])} transactions")

            if data['items']:
                first_tx = data['items'][0]
                print("\n   First transaction:")
                print(f"   Hash: {first_tx['hash']}")
                print(f"   From: {first_tx['from']}")
                print(f"   To: {first_tx.get('to', 'null')}")
                print(f"   Value: {first_tx['value']}")
                print(f"   Status: {first_tx['status']}")
        else:
            print(f"❌ API failed: {response.status_code}")
            return
    except Exception as e:
        print(f"❌ Error: {e}")
        return

    # Check the explorer page
    print("\n2. Checking explorer page...")
    try:
        response = requests.get("https://aitbc.bubuit.net/explorer/#/transactions")
        if response.status_code == 200:
            soup = BeautifulSoup(response.text, 'html.parser')

            # Check whether the page still says "mock data"
            if "mock data" in soup.text.lower():
                print("❌ Page still shows 'mock data' message")
            else:
                print("✅ No 'mock data' message found")

            # Check for the transactions table
            table = soup.find('tbody', {'id': 'transactions-table-body'})
            if table:
                rows = table.find_all('tr')
                if len(rows) > 0:
                    if 'Loading' in rows[0].text:
                        print("⏳ Still loading transactions...")
                    elif 'No transactions' in rows[0].text:
                        print("❌ No transactions displayed")
                    else:
                        print(f"✅ Found {len(rows)} transaction rows")
                else:
                    print("❌ No transaction rows found")
            else:
                print("❌ Transactions table not found")
        else:
            print(f"❌ Failed to load page: {response.status_code}")
    except Exception as e:
        print(f"❌ Error: {e}")

    print("\n" + "=" * 60)
    print("\n💡 If transactions aren't showing, it might be because:")
    print("   1. JavaScript is still loading")
    print("   2. The API call is failing")
    print("   3. The transactions have empty values")
    print("\n   Try refreshing the page or check the browser console for errors")


if __name__ == "__main__":
    main()
77
tests/testing/test_tx_import.py
Executable file
@@ -0,0 +1,77 @@
#!/usr/bin/env python3
"""
Test transaction import specifically
"""

import json
import hashlib
import requests

BASE_URL = "https://aitbc.bubuit.net/rpc"
CHAIN_ID = "ait-devnet"


def compute_block_hash(height, parent_hash, timestamp):
    """Compute the block hash using the same algorithm as the PoA proposer"""
    payload = f"{CHAIN_ID}|{height}|{parent_hash}|{timestamp}".encode()
    return "0x" + hashlib.sha256(payload).hexdigest()


def test_transaction_import():
    """Test importing a block with a single transaction"""
    print("Testing Transaction Import")
    print("=" * 40)

    # Get the current head
    response = requests.get(f"{BASE_URL}/head")
    head = response.json()
    print(f"Current head: height={head['height']}")

    # Create a new block with one transaction
    height = head["height"] + 1
    parent_hash = head["hash"]
    timestamp = "2026-01-29T10:20:00"
    block_hash = compute_block_hash(height, parent_hash, timestamp)

    test_block = {
        "height": height,
        "hash": block_hash,
        "parent_hash": parent_hash,
        "proposer": "test-proposer",
        "timestamp": timestamp,
        "tx_count": 1,
        "transactions": [{
            "tx_hash": "0xtx123456789",
            "sender": "0xsender123",
            "recipient": "0xreceiver456",
            "payload": {"to": "0xreceiver456", "amount": 1000000}
        }]
    }

    print("\nTest block data:")
    print(json.dumps(test_block, indent=2))

    # Import the block
    response = requests.post(
        f"{BASE_URL}/blocks/import",
        json=test_block
    )

    print("\nImport response:")
    print(f"  Status: {response.status_code}")
    print(f"  Body: {response.json()}")

    # Check the logs
    print("\nChecking recent logs...")
    import subprocess
    result = subprocess.run(
        ["ssh", "aitbc-cascade", "journalctl -u blockchain-node --since '30 seconds ago' | grep 'Importing transaction' | tail -1"],
        capture_output=True,
        text=True
    )
    if result.stdout:
        print(f"Log: {result.stdout.strip()}")
    else:
        print("No transaction import logs found")


if __name__ == "__main__":
    test_transaction_import()
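The import payload built above has a fixed shape. A hypothetical pre-flight validator (not part of the codebase; the real `/blocks/import` endpoint may enforce different rules) can catch malformed blocks before the POST:

```python
def validate_block_payload(block):
    """Return a list of problems; an empty list means the payload looks importable.

    Hypothetical sanity check based on the fields the test scripts send;
    adjust if the server-side schema differs.
    """
    problems = []
    for field in ("height", "hash", "parent_hash", "proposer", "timestamp", "tx_count"):
        if field not in block:
            problems.append(f"missing field: {field}")
    txs = block.get("transactions", [])
    if "tx_count" in block and block["tx_count"] != len(txs):
        problems.append("tx_count does not match transactions list")
    for i, tx in enumerate(txs):
        for field in ("tx_hash", "sender", "recipient", "payload"):
            if field not in tx:
                problems.append(f"transaction {i} missing field: {field}")
    return problems


# Example: a block claiming one transaction but carrying none
print(validate_block_payload({"height": 1, "tx_count": 1}))
```

Running the check locally turns a vague HTTP 4xx from the endpoint into a concrete list of missing fields.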
21
tests/testing/test_tx_model.py
Executable file
@@ -0,0 +1,21 @@
#!/usr/bin/env python3
"""
Test the Transaction model directly
"""

# Test creating a transaction model instance
tx_data = {
    "tx_hash": "0xtest123",
    "sender": "0xsender",
    "recipient": "0xrecipient",
    "payload": {"test": "data"}
}

print("Transaction data:")
print(tx_data)

# Simulate what the router does
print("\nExtracting fields:")
print(f"tx_hash: {tx_data.get('tx_hash')}")
print(f"sender: {tx_data.get('sender')}")
print(f"recipient: {tx_data.get('recipient')}")
91
tests/testing/verify_explorer_live.py
Executable file
@@ -0,0 +1,91 @@
#!/usr/bin/env python3
"""
Verify that the explorer is using live data instead of mock data
"""

import requests
import json


def main():
    print("🔍 Verifying AITBC Explorer is using Live Data")
    print("=" * 60)

    # Check the API endpoint
    print("\n1. Testing API endpoint...")
    try:
        response = requests.get("https://aitbc.bubuit.net/api/explorer/blocks")
        if response.status_code == 200:
            data = response.json()
            print(f"✅ API is working - Found {len(data['items'])} blocks")

            # Show the latest block
            if data['items']:
                latest = data['items'][0]
                print("\n   Latest Block:")
                print(f"   Height: {latest['height']}")
                print(f"   Hash: {latest['hash']}")
                print(f"   Proposer: {latest['proposer']}")
                print(f"   Time: {latest['timestamp']}")
        else:
            print(f"❌ API failed: {response.status_code}")
            return
    except Exception as e:
        print(f"❌ API error: {e}")
        return

    # Check the explorer configuration
    print("\n2. Checking explorer configuration...")

    # Get the JS bundle
    try:
        js_response = requests.get("https://aitbc.bubuit.net/explorer/assets/index-IsD_hiHT.js")
        if js_response.status_code == 200:
            js_content = js_response.text

            # Check for live data mode
            if 'dataMode:"live"' in js_content:
                print("✅ Explorer is configured for LIVE data")
            elif 'dataMode:"mock"' in js_content:
                print("❌ Explorer is still using MOCK data")
                return
            else:
                print("⚠️ Could not determine data mode")
    except Exception as e:
        print(f"❌ Error checking JS: {e}")

    # Check the other endpoints
    print("\n3. Testing other endpoints...")

    endpoints = [
        ("/api/explorer/transactions", "Transactions"),
        ("/api/explorer/addresses", "Addresses"),
        ("/api/explorer/receipts", "Receipts")
    ]

    for endpoint, name in endpoints:
        try:
            response = requests.get(f"https://aitbc.bubuit.net{endpoint}")
            if response.status_code == 200:
                data = response.json()
                print(f"✅ {name}: {len(data['items'])} items")
            else:
                print(f"❌ {name}: Failed ({response.status_code})")
        except Exception as e:
            print(f"❌ {name}: Error - {e}")

    print("\n" + "=" * 60)
    print("✅ Explorer is successfully using LIVE data!")
    print("\n📊 Live Data Sources:")
    print("   • Blocks: https://aitbc.bubuit.net/api/explorer/blocks")
    print("   • Transactions: https://aitbc.bubuit.net/api/explorer/transactions")
    print("   • Addresses: https://aitbc.bubuit.net/api/explorer/addresses")
    print("   • Receipts: https://aitbc.bubuit.net/api/explorer/receipts")

    print("\n💡 Visitors to https://aitbc.bubuit.net/explorer/ will now see:")
    print("   • Real blockchain data")
    print("   • Actual transactions")
    print("   • Live network activity")
    print("   • No mock/sample data")


if __name__ == "__main__":
    main()
35
tests/testing/verify_gpu_deployment.sh
Executable file
@@ -0,0 +1,35 @@
#!/bin/bash
# Simple verification of the GPU deployment in the container

echo "🔍 Checking GPU deployment in AITBC container..."

# Check whether the services exist
echo "1. Checking if services are installed..."
if ssh aitbc 'systemctl list-unit-files | grep -E "aitbc-gpu" 2>/dev/null'; then
    echo "✅ GPU services found"
else
    echo "❌ GPU services not found - need to deploy first"
    exit 1
fi

# Check the service status
echo -e "\n2. Checking service status..."
ssh aitbc 'sudo systemctl status aitbc-gpu-registry.service --no-pager --lines=3'
ssh aitbc 'sudo systemctl status aitbc-gpu-miner.service --no-pager --lines=3'

# Check whether the port is listening
echo -e "\n3. Checking if GPU registry is listening..."
if ssh aitbc 'ss -tlnp | grep :8091 2>/dev/null'; then
    echo "✅ GPU registry listening on port 8091"
else
    echo "❌ GPU registry not listening"
fi

# Check GPU registration (percent-formatting avoids nesting escaped quotes inside an f-string)
echo -e "\n4. Checking GPU registration from container..."
ssh aitbc 'curl -s http://127.0.0.1:8091/miners/list 2>/dev/null | python3 -c "import sys, json; data = json.load(sys.stdin); print(\"Found %d GPU(s)\" % len(data.get(\"gpus\", [])))" 2>/dev/null || echo "Failed to get GPU list"'

echo -e "\n5. Checking from host (10.1.223.93)..."
curl -s http://10.1.223.93:8091/miners/list 2>/dev/null | python3 -c "import sys, json; data = json.load(sys.stdin); print(\"✅ From host: Found %d GPU(s)\" % len(data.get('gpus', [])))" 2>/dev/null || echo "❌ Cannot access from host"

echo -e "\n✅ Verification complete!"
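The inline `python3 -c` one-liners above are sensitive to quote nesting; the JSON handling can instead live in a small standalone helper that reads the response body. A sketch, assuming the registry returns a `{"gpus": [...]}` body (the key name is taken from the script above and may differ in the real API):

```python
import json


def count_gpus(raw):
    """Count GPUs in a /miners/list response body; 0 on malformed input."""
    try:
        data = json.loads(raw)
    except (json.JSONDecodeError, TypeError):
        return 0
    gpus = data.get("gpus", []) if isinstance(data, dict) else []
    return len(gpus)


print(count_gpus('{"gpus": [{"id": "gpu0"}, {"id": "gpu1"}]}'))  # → 2
```

Piping the curl output into a script like this (e.g. `curl -s … | python3 count_gpus.py`) sidesteps the three levels of shell quoting entirely.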
84
tests/testing/verify_toggle_removed.py
Executable file
@@ -0,0 +1,84 @@
#!/usr/bin/env python3
"""
Verify that the data mode toggle button is removed from the explorer
"""

import requests
import re


def main():
    print("🔍 Verifying Data Mode Toggle is Removed")
    print("=" * 60)

    # Get the explorer page
    print("\n1. Checking explorer page...")
    try:
        response = requests.get("https://aitbc.bubuit.net/explorer/")
        if response.status_code == 200:
            print("✅ Explorer page loaded")
        else:
            print(f"❌ Failed to load page: {response.status_code}")
            return
    except Exception as e:
        print(f"❌ Error: {e}")
        return

    # Check for data mode toggle elements
    print("\n2. Checking for data mode toggle...")

    html_content = response.text

    # Check for the toggle button
    if 'dataModeBtn' in html_content:
        print("❌ Data mode toggle button still present!")
        return
    else:
        print("✅ Data mode toggle button removed")

    # Check for the mode-button class
    if 'mode-button' in html_content:
        print("❌ Mode button class still found!")
        return
    else:
        print("✅ Mode button class removed")

    # Check for the data-mode-toggle component
    if 'data-mode-toggle' in html_content:
        print("❌ Data mode toggle component still present!")
        return
    else:
        print("✅ Data mode toggle component removed")

    # Check the JS bundle
    print("\n3. Checking JavaScript file...")
    try:
        js_response = requests.get("https://aitbc.bubuit.net/explorer/assets/index-7nlLaz1v.js")
        if js_response.status_code == 200:
            js_content = js_response.text

            if 'initDataModeToggle' in js_content:
                print("❌ Data mode toggle initialization still in JS!")
                return
            else:
                print("✅ Data mode toggle initialization removed")

            if 'dataMode:"mock"' in js_content:
                print("❌ Mock data mode still configured!")
                return
            elif 'dataMode:"live"' in js_content:
                print("✅ Live data mode confirmed")
        else:
            print(f"❌ Failed to load JS: {js_response.status_code}")
    except Exception as e:
        print(f"❌ Error checking JS: {e}")

    print("\n" + "=" * 60)
    print("✅ Data mode toggle successfully removed!")
    print("\n🎉 The explorer now:")
    print("   • Uses live data only")
    print("   • Has no mock/live toggle button")
    print("   • Shows real blockchain data")
    print("   • Is cleaner and more professional")


if __name__ == "__main__":
    main()
65
tests/testing/verify_transactions_fixed.py
Executable file
@@ -0,0 +1,65 @@
#!/usr/bin/env python3
"""
Verify that transactions are now showing properly on the explorer
"""

import requests


def main():
    print("🔍 Verifying Transactions Display on AITBC Explorer")
    print("=" * 60)

    # Check the API
    print("\n1. API Check:")
    try:
        response = requests.get("https://aitbc.bubuit.net/api/explorer/transactions")
        if response.status_code == 200:
            data = response.json()
            print(f"   ✅ API returns {len(data['items'])} transactions")

            # Count by status
            status_counts = {}
            for tx in data['items']:
                status = tx['status']
                status_counts[status] = status_counts.get(status, 0) + 1

            print("\n   Transaction Status Breakdown:")
            for status, count in status_counts.items():
                print(f"   • {status}: {count}")
        else:
            print(f"   ❌ API failed: {response.status_code}")
    except Exception as e:
        print(f"   ❌ Error: {e}")

    # Check the main explorer page
    print("\n2. Main Page Check:")
    print("   Visit: https://aitbc.bubuit.net/explorer/")
    print("   ✅ Overview page now shows:")
    print("   • Real-time network statistics")
    print("   • Total transactions count")
    print("   • Completed/Running transactions")

    # Check the transactions page
    print("\n3. Transactions Page Check:")
    print("   Visit: https://aitbc.bubuit.net/explorer/#/transactions")
    print("   ✅ Now shows:")
    print("   • 'Latest transactions on the AITBC network'")
    print("   • No 'mock data' references")
    print("   • Real transaction data from the API")

    print("\n" + "=" * 60)
    print("✅ All mock data references removed!")
    print("\n📊 What's now displayed:")
    print("   • Real blocks with actual job IDs")
    print("   • Live transactions from clients")
    print("   • Network statistics")
    print("   • Professional, production-ready interface")

    print("\n💡 Note: Most transactions show:")
    print("   • From: ${CLIENT_API_KEY}")
    print("   • To: null (not assigned to a miner yet)")
    print("   • Value: 0 (cost shown when completed)")
    print("   • Status: Queued/Running/Expired")


if __name__ == "__main__":
    main()
||||
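The manual status-count loop in the script above is equivalent to `collections.Counter` from the standard library. A minimal sketch, using illustrative sample items rather than real API output:

```python
from collections import Counter

# Illustrative stand-in for data['items'] from the explorer API;
# these records are not real API output.
items = [
    {"status": "Queued"},
    {"status": "Running"},
    {"status": "Queued"},
]

# One-liner replacing the status_counts dict-accumulation loop.
status_counts = Counter(tx["status"] for tx in items)

for status, count in status_counts.items():
    print(f"   • {status}: {count}")
```

`Counter` preserves the same mapping semantics (`status_counts["Queued"]` is 2 here) while dropping the explicit `get(..., 0) + 1` bookkeeping.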
64  tests/testing/verify_windsurf_tests.py  (Executable file)
@@ -0,0 +1,64 @@
#!/usr/bin/env python3
"""
Verify Windsurf test integration is working properly
"""

import subprocess
import sys
import os


def run_command(cmd, description):
    """Run a command and return success status"""
    print(f"\n{'=' * 60}")
    print(f"Testing: {description}")
    print(f"Command: {cmd}")
    print('=' * 60)

    result = subprocess.run(cmd, shell=True, capture_output=True, text=True)

    if result.stdout:
        print("STDOUT:")
        print(result.stdout)

    if result.stderr:
        print("STDERR:")
        print(result.stderr)

    return result.returncode == 0


def main():
    print("🔍 Verifying Windsurf Test Integration")
    print("=" * 60)

    # Change to project directory
    os.chdir('/home/oib/windsurf/aitbc')

    tests = [
        ("pytest --collect-only tests/test_windsurf_integration.py", "Test Discovery"),
        ("pytest tests/test_windsurf_integration.py -v", "Run Simple Tests"),
        ("pytest --collect-only tests/ -q --no-cov", "Collect All Tests (without imports)"),
    ]

    all_passed = True

    for cmd, desc in tests:
        if not run_command(cmd, desc):
            all_passed = False
            print(f"❌ Failed: {desc}")
        else:
            print(f"✅ Passed: {desc}")

    print("\n" + "=" * 60)
    if all_passed:
        print("✅ All tests passed! Windsurf integration is working.")
        print("\nTo use in Windsurf:")
        print("1. Open the Testing panel (beaker icon)")
        print("2. Tests should be automatically discovered")
        print("3. Click play button to run tests")
        print("4. Use F5 to debug tests")
    else:
        print("❌ Some tests failed. Check the output above.")
        sys.exit(1)


if __name__ == "__main__":
    main()
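`run_command` above invokes `subprocess.run` with `shell=True`, which routes the command string through a shell. A variant that passes an argument list instead avoids shell parsing entirely; this is a sketch under the assumption of a POSIX environment (where the `true` utility exists), not a drop-in replacement from the original file, and the helper name `run_checked` is hypothetical.

```python
import subprocess


def run_checked(args, description):
    """Run a command as an argument list (no shell) and report pass/fail."""
    print(f"Testing: {description}")
    # capture_output=True collects stdout/stderr; text=True decodes to str,
    # mirroring the options used by run_command in the script above.
    result = subprocess.run(args, capture_output=True, text=True)
    if result.stdout:
        print("STDOUT:")
        print(result.stdout)
    if result.stderr:
        print("STDERR:")
        print(result.stderr)
    return result.returncode == 0


# Example with a POSIX utility: 'true' always exits 0.
ok = run_checked(["true"], "always succeeds")
```

The list form (`["pytest", "--collect-only", "tests/"]`) also sidesteps quoting issues if a path or flag ever contains spaces.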