feat: add marketplace metrics, privacy features, and service registry endpoints
- Add Prometheus metrics for marketplace API throughput and error rates with new dashboard panels
- Implement confidential transaction models with encryption support and access control
- Add key management system with registration, rotation, and audit logging
- Create services and registry routers for service discovery and management
- Integrate ZK proof generation for privacy-preserving receipts
- Add metrics instru
558
tests/README.md
Normal file
@@ -0,0 +1,558 @@
# AITBC Test Suite
|
||||
|
||||
This directory contains the comprehensive test suite for the AITBC platform, including unit tests, integration tests, end-to-end tests, security tests, and load tests.
|
||||
|
||||
## Table of Contents
|
||||
|
||||
1. [Test Structure](#test-structure)
|
||||
2. [Prerequisites](#prerequisites)
|
||||
3. [Running Tests](#running-tests)
|
||||
4. [Test Types](#test-types)
|
||||
5. [Configuration](#configuration)
|
||||
6. [CI/CD Integration](#cicd-integration)
|
||||
7. [Troubleshooting](#troubleshooting)
|
||||
|
||||
## Test Structure

```
tests/
├── conftest.py              # Shared fixtures and configuration
├── pytest.ini               # Pytest configuration
├── README.md                # This file
├── unit/                    # Unit tests
│   └── test_coordinator_api.py
├── integration/             # Integration tests
│   └── test_blockchain_node.py
├── e2e/                     # End-to-end tests
│   └── test_wallet_daemon.py
├── security/                # Security tests
│   └── test_confidential_transactions.py
├── load/                    # Load tests
│   └── locustfile.py
└── fixtures/                # Test data and fixtures
    ├── sample_receipts.json
    └── test_transactions.json
```
## Prerequisites

### Required Dependencies

```bash
# Core testing framework
pip install pytest pytest-asyncio pytest-cov pytest-mock pytest-xdist

# Security testing
pip install bandit safety

# Load testing
pip install locust

# Additional testing tools
pip install requests-mock websockets psutil
```
### System Dependencies

```bash
# Ubuntu/Debian
sudo apt-get update
sudo apt-get install -y postgresql redis-server

# macOS
brew install postgresql redis

# Docker (for isolated testing)
docker --version
```
### Environment Setup

1. Clone the repository:

   ```bash
   git clone https://github.com/aitbc/aitbc.git
   cd aitbc
   ```

2. Create a virtual environment:

   ```bash
   python -m venv venv
   source venv/bin/activate  # On Windows: venv\Scripts\activate
   ```

3. Install dependencies:

   ```bash
   pip install -r requirements.txt
   pip install -r requirements-test.txt
   ```

4. Set up test databases:

   ```bash
   # PostgreSQL
   createdb aitbc_test

   # Redis (use test database 1)
   redis-cli -n 1 FLUSHDB
   ```

5. Set environment variables:

   ```bash
   export DATABASE_URL="postgresql://localhost/aitbc_test"
   export REDIS_URL="redis://localhost:6379/1"
   export TEST_MODE="true"
   ```
## Running Tests

### Basic Commands

```bash
# Run all tests
pytest

# Run with coverage
pytest --cov=apps --cov=packages

# Run specific test file
pytest tests/unit/test_coordinator_api.py

# Run specific test class
pytest tests/unit/test_coordinator_api.py::TestJobEndpoints

# Run specific test method
pytest tests/unit/test_coordinator_api.py::TestJobEndpoints::test_create_job_success
```
### Running by Test Type

```bash
# Unit tests only (fast)
pytest -m unit

# Integration tests (require services)
pytest -m integration

# End-to-end tests (full system)
pytest -m e2e

# Security tests
pytest -m security

# Load tests (requires Locust)
locust -f tests/load/locustfile.py

# Performance tests
pytest -m performance

# GPU tests (requires GPU)
pytest -m gpu
```
### Parallel Execution

```bash
# Run with multiple workers
pytest -n auto

# Specify number of workers
pytest -n 4

# Distribute by test file (--dist only takes effect together with -n)
pytest -n auto --dist=loadfile
```
### Filtering Tests

```bash
# Run tests matching pattern
pytest -k "test_create_job"

# Run tests not matching pattern
pytest -k "not slow"

# Run tests with multiple markers
pytest -m "unit and not slow"

# Run tests with any of multiple markers
pytest -m "unit or integration"
```
## Test Types

### Unit Tests (`tests/unit/`)

Fast, isolated tests that test individual components:

- **Purpose**: Test individual functions and classes
- **Speed**: < 1 second per test
- **Dependencies**: Mocked external services
- **Database**: In-memory SQLite
- **Examples**:

```bash
pytest tests/unit/ -v
```
### Integration Tests (`tests/integration/`)

Tests that verify multiple components work together:

- **Purpose**: Test component interactions
- **Speed**: 1-10 seconds per test
- **Dependencies**: Real services required
- **Database**: Test PostgreSQL instance
- **Examples**:

```bash
# Start required services first
docker-compose up -d postgres redis

# Run integration tests
pytest tests/integration/ -v
```
### End-to-End Tests (`tests/e2e/`)

Full system tests that simulate real user workflows:

- **Purpose**: Test complete user journeys
- **Speed**: 10-60 seconds per test
- **Dependencies**: Full system running
- **Database**: Production-like setup
- **Examples**:

```bash
# Start full system
docker-compose up -d

# Run E2E tests
pytest tests/e2e/ -v -s
```
### Security Tests (`tests/security/`)

Tests that verify security properties and vulnerability resistance:

- **Purpose**: Test security controls
- **Speed**: Variable (some are slow)
- **Dependencies**: May require special setup
- **Tools**: Bandit, Safety, custom security tests
- **Examples**:

```bash
# Run security scanner
bandit -r apps/ -f json -o bandit-report.json

# Run security tests
pytest tests/security/ -v
```
### Load Tests (`tests/load/`)

Performance and scalability tests:

- **Purpose**: Test system under load
- **Speed**: Long-running (minutes)
- **Dependencies**: Locust, staging environment
- **Examples**:

```bash
# Run Locust web UI
locust -f tests/load/locustfile.py --web-host 127.0.0.1

# Run headless
locust -f tests/load/locustfile.py --headless -u 100 -r 10 -t 5m
```
## Configuration

### Pytest Configuration (`pytest.ini`)

Key configuration options:

```ini
[pytest]
# Test paths
testpaths = tests
python_files = test_*.py

# Coverage settings
addopts = --cov=apps --cov=packages --cov-report=html

# Markers
markers =
    unit: Unit tests
    integration: Integration tests
    e2e: End-to-end tests
    security: Security tests
    slow: Slow tests
```

Note: in `pytest.ini` the section header is `[pytest]`; the `[tool:pytest]` form is only used in `setup.cfg`.
### Environment Variables

```bash
# Test configuration
export TEST_MODE=true
export TEST_DATABASE_URL="postgresql://localhost/aitbc_test"
export TEST_REDIS_URL="redis://localhost:6379/1"

# Service URLs for integration tests
export COORDINATOR_URL="http://localhost:8001"
export WALLET_URL="http://localhost:8002"
export BLOCKCHAIN_URL="http://localhost:8545"

# Security test configuration
export TEST_HSM_ENDPOINT="http://localhost:9999"
export TEST_ZK_CIRCUITS_PATH="./apps/zk-circuits"
```
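In test helpers these variables are typically read once with local-development defaults; a minimal sketch (the helper name and defaults here are illustrative, not part of the repo):

```python
import os

def load_test_settings() -> dict:
    """Read test settings from the environment, falling back to local defaults."""
    return {
        "test_mode": os.environ.get("TEST_MODE", "false").lower() == "true",
        "database_url": os.environ.get("TEST_DATABASE_URL", "postgresql://localhost/aitbc_test"),
        "redis_url": os.environ.get("TEST_REDIS_URL", "redis://localhost:6379/1"),
        "coordinator_url": os.environ.get("COORDINATOR_URL", "http://localhost:8001"),
    }

settings = load_test_settings()
```

Reading the environment in one place keeps individual tests free of `os.environ` lookups.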
### Test Data Management

```python
# Using fixtures in conftest.py
@pytest.fixture
def test_data():
    return {
        "sample_job": {...},
        "sample_receipt": {...},
    }

# Custom test configuration
@pytest.fixture(scope="session")
def test_config():
    return TestConfig(
        database_url="sqlite:///:memory:",
        redis_url="redis://localhost:6379/1",
    )
```
## CI/CD Integration

### GitHub Actions Example

```yaml
name: Tests

on: [push, pull_request]

jobs:
  unit-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-python@v4
        with:
          python-version: "3.11"

      - name: Install dependencies
        run: |
          pip install -r requirements.txt
          pip install -r requirements-test.txt

      - name: Run unit tests
        run: |
          pytest tests/unit/ -v --cov=apps --cov-report=xml

      - name: Upload coverage
        uses: codecov/codecov-action@v3
        with:
          file: ./coverage.xml

  integration-tests:
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:15
        env:
          POSTGRES_PASSWORD: postgres
        ports:
          - 5432:5432  # expose the service container to the runner host
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5

      redis:
        image: redis:7
        ports:
          - 6379:6379
        options: >-
          --health-cmd "redis-cli ping"
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5

    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-python@v4
        with:
          python-version: "3.11"

      - name: Install dependencies
        run: |
          pip install -r requirements.txt
          pip install -r requirements-test.txt

      - name: Run integration tests
        run: |
          pytest tests/integration/ -v
        env:
          DATABASE_URL: postgresql://postgres:postgres@localhost/postgres
          REDIS_URL: redis://localhost:6379/0
```

Note: `actions/setup-python` takes `python-version`, not `python`, and service containers need `ports` mappings so tests running on the runner host can reach them via `localhost`.
### Docker Compose for Testing

```yaml
# docker-compose.test.yml
version: '3.8'

services:
  postgres:
    image: postgres:15
    environment:
      POSTGRES_DB: aitbc_test
      POSTGRES_USER: test
      POSTGRES_PASSWORD: test
    ports:
      - "5433:5432"
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U test"]
      interval: 5s
      timeout: 5s
      retries: 5

  redis:
    image: redis:7-alpine
    ports:
      - "6380:6379"
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      timeout: 5s
      retries: 5

  coordinator:
    build: ./apps/coordinator-api
    environment:
      DATABASE_URL: postgresql://test:test@postgres:5432/aitbc_test
      REDIS_URL: redis://redis:6379/0
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy
    ports:
      - "8001:8000"
```
## Troubleshooting

### Common Issues

1. **Import Errors**

   ```bash
   # Ensure PYTHONPATH is set
   export PYTHONPATH="${PYTHONPATH}:$(pwd)"

   # Or install in development mode
   pip install -e .
   ```

2. **Database Connection Errors**

   ```bash
   # Check if PostgreSQL is running
   pg_isready -h localhost -p 5432

   # Create test database
   createdb -h localhost -p 5432 aitbc_test
   ```

3. **Redis Connection Errors**

   ```bash
   # Check if Redis is running
   redis-cli ping

   # Use correct database
   redis-cli -n 1 FLUSHDB
   ```

4. **Test Timeouts**

   ```bash
   # Increase timeout for slow tests (requires the pytest-timeout plugin)
   pytest --timeout=600

   # Run tests sequentially (disable xdist workers)
   pytest -n 0
   ```

5. **Port Conflicts**

   ```bash
   # Kill processes using ports
   lsof -ti:8001 | xargs kill -9
   lsof -ti:8002 | xargs kill -9
   ```
### Debugging Tests

```bash
# Run with verbose output
pytest -v -s

# Stop on first failure
pytest -x

# Run with pdb on failure
pytest --pdb

# Show long tracebacks with local variables
pytest --tb=long --showlocals

# Run specific test with debugging
pytest tests/unit/test_coordinator_api.py::TestJobEndpoints::test_create_job_success -v -s --pdb
```
### Performance Issues

```bash
# Profile test execution (requires the pytest-profiling plugin)
pytest --profile

# Find slowest tests
pytest --durations=10

# Run with memory profiling (requires the pytest-memprof plugin)
pytest --memprof
```
### Test Data Issues

```bash
# Clean test database
psql -h localhost -U test -d aitbc_test -c "DROP SCHEMA public CASCADE; CREATE SCHEMA public;"

# Reset the Redis test database (FLUSHDB clears only the selected DB;
# FLUSHALL would wipe every database on the server)
redis-cli -n 1 FLUSHDB

# Regenerate test fixtures
python tests/generate_fixtures.py
```
## Best Practices

1. **Write Isolated Tests**: Each test should be independent
2. **Use Descriptive Names**: Test names should describe what they test
3. **Mock External Dependencies**: Use mocks for external services
4. **Clean Up Resources**: Use fixtures for setup/teardown
5. **Test Edge Cases**: Don't just test happy paths
6. **Use Type Hints**: Makes tests more maintainable
7. **Document Complex Tests**: Add comments for complex logic
## Contributing

When adding new tests:

1. Follow the existing structure and naming conventions
2. Add appropriate markers (`@pytest.mark.unit`, etc.)
3. Update this README if adding new test types
4. Ensure tests pass on CI before submitting a PR
5. Add coverage for new features
## Resources

- [Pytest Documentation](https://docs.pytest.org/)
- [Locust Documentation](https://docs.locust.io/)
- [Security Testing Guide](https://owasp.org/www-project-security-testing-guide/)
- [Load Testing Best Practices](https://docs.locust.io/en/stable/writing-a-locustfile.html)
473
tests/conftest.py
Normal file
@@ -0,0 +1,473 @@
"""
|
||||
Shared test configuration and fixtures for AITBC
|
||||
"""
|
||||
|
||||
import asyncio
|
||||
import pytest
|
||||
import json
|
||||
import tempfile
|
||||
from datetime import datetime, timedelta
|
||||
from typing import Dict, Any, Generator, AsyncGenerator
|
||||
from unittest.mock import Mock, AsyncMock
|
||||
from sqlalchemy import create_engine, event
|
||||
from sqlalchemy.orm import sessionmaker, Session
|
||||
from sqlalchemy.pool import StaticPool
|
||||
from fastapi.testclient import TestClient
|
||||
import redis
|
||||
from cryptography.hazmat.primitives.asymmetric import ed25519
|
||||
from cryptography.hazmat.primitives import serialization
|
||||
|
||||
# Import AITBC modules
|
||||
from apps.coordinator_api.src.app.main import app as coordinator_app
|
||||
from apps.coordinator_api.src.app.database import get_db
|
||||
from apps.coordinator_api.src.app.models import Base
|
||||
from apps.coordinator_api.src.app.models.multitenant import Tenant, TenantUser, TenantQuota
|
||||
from apps.wallet_daemon.src.app.main import app as wallet_app
|
||||
from packages.py.aitbc_crypto import sign_receipt, verify_receipt
|
||||
from packages.py.aitbc_sdk import AITBCClient
|
||||
|
||||
|
||||
@pytest.fixture(scope="session")
def event_loop():
    """Create an instance of the default event loop for the test session."""
    loop = asyncio.get_event_loop_policy().new_event_loop()
    yield loop
    loop.close()


@pytest.fixture(scope="session")
def test_config():
    """Test configuration settings."""
    return {
        "database_url": "sqlite:///:memory:",
        "redis_url": "redis://localhost:6379/1",  # Use test DB
        "test_tenant_id": "test-tenant-123",
        "test_user_id": "test-user-456",
        "test_api_key": "test-api-key-789",
        "coordinator_url": "http://localhost:8001",
        "wallet_url": "http://localhost:8002",
        "blockchain_url": "http://localhost:8545",
    }
@pytest.fixture(scope="session")
def test_engine(test_config):
    """Create a test database engine."""
    engine = create_engine(
        test_config["database_url"],
        connect_args={"check_same_thread": False},
        poolclass=StaticPool,
    )
    Base.metadata.create_all(bind=engine)
    yield engine
    Base.metadata.drop_all(bind=engine)
@pytest.fixture
def db_session(test_engine) -> Generator[Session, None, None]:
    """Create a database session for testing."""
    connection = test_engine.connect()
    transaction = connection.begin()
    session = sessionmaker(autocommit=False, autoflush=False, bind=connection)()

    # Begin a nested transaction (SAVEPOINT) so test commits can be rolled back
    nested = connection.begin_nested()

    @event.listens_for(session, "after_transaction_end")
    def end_savepoint(session, transaction):
        """Restart the savepoint whenever test code ends it (e.g. via commit)."""
        nonlocal nested
        if not nested.is_active:
            nested = connection.begin_nested()

    yield session

    # Roll back all changes made during the test
    session.close()
    transaction.rollback()
    connection.close()
@pytest.fixture
def test_redis():
    """Create a test Redis client."""
    client = redis.Redis.from_url("redis://localhost:6379/1", decode_responses=True)
    # Clear the test database before and after each test
    client.flushdb()
    yield client
    client.flushdb()
@pytest.fixture
def coordinator_client(db_session):
    """Create a test client for the coordinator API."""
    def override_get_db():
        yield db_session

    coordinator_app.dependency_overrides[get_db] = override_get_db
    with TestClient(coordinator_app) as client:
        yield client
    coordinator_app.dependency_overrides.clear()


@pytest.fixture
def wallet_client():
    """Create a test client for the wallet daemon."""
    with TestClient(wallet_app) as client:
        yield client
@pytest.fixture
def sample_tenant(db_session):
    """Create a sample tenant for testing."""
    tenant = Tenant(
        id="test-tenant-123",
        name="Test Tenant",
        status="active",
        created_at=datetime.utcnow(),
        updated_at=datetime.utcnow(),
    )
    db_session.add(tenant)
    db_session.commit()
    return tenant


@pytest.fixture
def sample_tenant_user(db_session, sample_tenant):
    """Create a sample tenant user for testing."""
    user = TenantUser(
        tenant_id=sample_tenant.id,
        user_id="test-user-456",
        role="admin",
        created_at=datetime.utcnow(),
    )
    db_session.add(user)
    db_session.commit()
    return user


@pytest.fixture
def sample_tenant_quota(db_session, sample_tenant):
    """Create a sample tenant quota for testing."""
    quota = TenantQuota(
        tenant_id=sample_tenant.id,
        resource_type="api_calls",
        limit=10000,
        used=0,
        period="monthly",
        created_at=datetime.utcnow(),
        updated_at=datetime.utcnow(),
    )
    db_session.add(quota)
    db_session.commit()
    return quota
@pytest.fixture
def sample_job_data():
    """Sample job data for testing."""
    return {
        "job_type": "ai_inference",
        "parameters": {
            "model": "gpt-3.5-turbo",
            "prompt": "Test prompt",
            "max_tokens": 100,
        },
        "requirements": {
            "gpu_memory": "8GB",
            "compute_time": 30,
        },
    }


@pytest.fixture
def sample_receipt_data():
    """Sample receipt data for testing."""
    return {
        "job_id": "test-job-123",
        "miner_id": "test-miner-456",
        "coordinator_id": "test-coordinator-789",
        "timestamp": datetime.utcnow().isoformat(),
        "result": {
            "output": "Test output",
            "confidence": 0.95,
            "tokens_used": 50,
        },
        "signature": "test-signature",
    }
@pytest.fixture
def test_keypair():
    """Generate a test Ed25519 keypair for signing."""
    private_key = ed25519.Ed25519PrivateKey.generate()
    public_key = private_key.public_key()
    return private_key, public_key


@pytest.fixture
def signed_receipt(sample_receipt_data, test_keypair):
    """Create a signed receipt for testing."""
    private_key, public_key = test_keypair

    # Serialize receipt without signature
    receipt_copy = sample_receipt_data.copy()
    receipt_copy.pop("signature", None)
    receipt_json = json.dumps(receipt_copy, sort_keys=True, separators=(',', ':'))

    # Sign the receipt
    signature = private_key.sign(receipt_json.encode())

    # Add signature to receipt
    receipt_copy["signature"] = signature.hex()
    receipt_copy["public_key"] = public_key.public_bytes(
        encoding=serialization.Encoding.Raw,
        format=serialization.PublicFormat.Raw,
    ).hex()

    return receipt_copy
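# The signing fixture above works because signer and verifier serialize the
# receipt canonically (sorted keys, compact separators) and therefore sign and
# check identical bytes. A stdlib-only illustration of that property; this
# helper is illustrative and not used elsewhere in the suite.
import hashlib


def canonical_receipt_bytes(receipt: Dict[str, Any]) -> bytes:
    """Serialize a receipt deterministically, excluding any existing signature."""
    body = {k: v for k, v in receipt.items() if k != "signature"}
    return json.dumps(body, sort_keys=True, separators=(",", ":")).encode()


# Key order in the input dict does not change the bytes that get signed:
# canonical_receipt_bytes({"a": 1, "b": 2}) == canonical_receipt_bytes({"b": 2, "a": 1})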
@pytest.fixture
def aitbc_client(test_config):
    """Create an AITBC client for testing."""
    return AITBCClient(
        base_url=test_config["coordinator_url"],
        api_key=test_config["test_api_key"],
    )


@pytest.fixture
def mock_miner_service():
    """Mock miner service for testing."""
    service = AsyncMock()
    service.register_miner = AsyncMock(return_value={"miner_id": "test-miner-456"})
    service.heartbeat = AsyncMock(return_value={"status": "active"})
    service.fetch_jobs = AsyncMock(return_value=[])
    service.submit_result = AsyncMock(return_value={"job_id": "test-job-123"})
    return service


@pytest.fixture
def mock_blockchain_node():
    """Mock blockchain node for testing."""
    node = AsyncMock()
    node.get_block = AsyncMock(return_value={"number": 100, "hash": "0x123"})
    node.get_transaction = AsyncMock(return_value={"hash": "0x456", "status": "confirmed"})
    node.submit_transaction = AsyncMock(return_value={"hash": "0x789", "status": "pending"})
    node.subscribe_blocks = AsyncMock()
    node.subscribe_transactions = AsyncMock()
    return node
@pytest.fixture
def sample_gpu_service():
    """Sample GPU service definition."""
    return {
        "id": "llm-inference",
        "name": "LLM Inference Service",
        "category": "ai_ml",
        "description": "Large language model inference",
        "requirements": {
            "gpu_memory": "16GB",
            "cuda_version": "11.8",
            "driver_version": "520.61.05",
        },
        "pricing": {
            "per_hour": 0.50,
            "per_token": 0.0001,
        },
        "capabilities": [
            "text-generation",
            "chat-completion",
            "embedding",
        ],
    }


@pytest.fixture
def sample_cross_chain_data():
    """Sample cross-chain settlement data."""
    return {
        "source_chain": "ethereum",
        "target_chain": "polygon",
        "source_tx_hash": "0xabcdef123456",
        "target_address": "0x1234567890ab",
        "amount": "1000",
        "token": "USDC",
        "bridge_id": "layerzero",
        "nonce": 12345,
    }


@pytest.fixture
def confidential_transaction_data():
    """Sample confidential transaction data."""
    return {
        "sender": "0x1234567890abcdef",
        "receiver": "0xfedcba0987654321",
        "amount": 1000,
        "asset": "AITBC",
        "confidential": True,
        "ciphertext": "encrypted_data_here",
        "viewing_key": "viewing_key_here",
        "proof": "zk_proof_here",
    }
@pytest.fixture
def mock_hsm_client():
    """Mock HSM client for testing."""
    client = AsyncMock()
    client.generate_key = AsyncMock(return_value={"key_id": "test-key-123"})
    client.sign_data = AsyncMock(return_value={"signature": "test-signature"})
    client.verify_signature = AsyncMock(return_value={"valid": True})
    client.encrypt_data = AsyncMock(return_value={"ciphertext": "encrypted_data"})
    client.decrypt_data = AsyncMock(return_value={"plaintext": "decrypted_data"})
    return client


@pytest.fixture
def temp_directory():
    """Create a temporary directory for testing."""
    with tempfile.TemporaryDirectory() as temp_dir:
        yield temp_dir


@pytest.fixture
def sample_config_file(temp_directory):
    """Create a sample configuration file."""
    from pathlib import Path  # local import keeps this fixture self-contained

    config = {
        "coordinator": {
            "host": "localhost",
            "port": 8001,
            "database_url": "sqlite:///test.db",
        },
        "blockchain": {
            "host": "localhost",
            "port": 8545,
            "chain_id": 1337,
        },
        "wallet": {
            "host": "localhost",
            "port": 8002,
            "keystore_path": temp_directory,
        },
    }

    # temp_directory is a plain string, so wrap it in Path before joining
    config_path = Path(temp_directory) / "config.json"
    with open(config_path, "w") as f:
        json.dump(config, f)

    return config_path
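# A test consuming sample_config_file would typically load and validate the
# JSON like this (illustrative helper, not used elsewhere in the suite; the
# expected sections mirror the fixture above):
def load_service_config(path) -> Dict[str, Any]:
    """Load a JSON config file and check the sections the tests rely on."""
    from pathlib import Path  # local import keeps the helper self-contained

    config = json.loads(Path(path).read_text())
    for section in ("coordinator", "blockchain", "wallet"):
        assert section in config, f"missing config section: {section}"
    return config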
# Async fixtures

@pytest.fixture
async def async_aitbc_client(test_config):
    """Create an async AITBC client for testing."""
    client = AITBCClient(
        base_url=test_config["coordinator_url"],
        api_key=test_config["test_api_key"],
    )
    yield client
    await client.close()


@pytest.fixture
async def websocket_client():
    """Create a WebSocket client for testing."""
    import websockets

    uri = "ws://localhost:8546"
    async with websockets.connect(uri) as websocket:
        yield websocket
# Performance testing fixtures

@pytest.fixture
def performance_config():
    """Configuration for performance tests."""
    return {
        "concurrent_users": 100,
        "ramp_up_time": 30,  # seconds
        "test_duration": 300,  # seconds
        "think_time": 1,  # seconds
    }


# Security testing fixtures

@pytest.fixture
def malicious_payloads():
    """Collection of malicious payloads for security testing."""
    return {
        "sql_injection": "'; DROP TABLE jobs; --",
        "xss": "<script>alert('xss')</script>",
        "path_traversal": "../../../etc/passwd",
        "overflow": "A" * 10000,
        "unicode": "\ufeff\u200b\u200c\u200d",
    }


@pytest.fixture
def rate_limit_config():
    """Rate limiting configuration for testing."""
    return {
        "requests_per_minute": 60,
        "burst_size": 10,
        "window_size": 60,
    }
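# Rate-limit tests driven by rate_limit_config usually assert token-bucket
# semantics: a burst of burst_size requests passes, then tokens refill at
# requests_per_minute / 60 per second. A deterministic sketch with an injected
# clock (illustrative only; not the platform's limiter implementation):
class _TokenBucket:
    """Token bucket with an injected clock so tests need no sleeping."""

    def __init__(self, requests_per_minute: int, burst_size: int):
        self.rate = requests_per_minute / 60.0  # tokens refilled per second
        self.capacity = burst_size
        self.tokens = float(burst_size)
        self.last = 0.0

    def allow(self, now: float) -> bool:
        # Refill based on elapsed time, capped at the burst capacity
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False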
# Helper functions

def create_test_job(job_id: str = None, **kwargs) -> Dict[str, Any]:
    """Create a test job with default values."""
    return {
        "id": job_id or f"test-job-{datetime.utcnow().timestamp()}",
        "status": "pending",
        "created_at": datetime.utcnow().isoformat(),
        "updated_at": datetime.utcnow().isoformat(),
        "job_type": kwargs.get("job_type", "ai_inference"),
        "parameters": kwargs.get("parameters", {}),
        "requirements": kwargs.get("requirements", {}),
        "tenant_id": kwargs.get("tenant_id", "test-tenant-123"),
    }


def create_test_receipt(job_id: str = None, **kwargs) -> Dict[str, Any]:
    """Create a test receipt with default values."""
    return {
        "id": f"receipt-{job_id or 'test'}",
        "job_id": job_id or "test-job-123",
        "miner_id": kwargs.get("miner_id", "test-miner-456"),
        "coordinator_id": kwargs.get("coordinator_id", "test-coordinator-789"),
        "timestamp": kwargs.get("timestamp", datetime.utcnow().isoformat()),
        "result": kwargs.get("result", {"output": "test"}),
        "signature": kwargs.get("signature", "test-signature"),
    }


def assert_valid_receipt(receipt: Dict[str, Any]):
    """Assert that a receipt has a valid structure."""
    required_fields = ["id", "job_id", "miner_id", "coordinator_id", "timestamp", "result", "signature"]
    for field in required_fields:
        assert field in receipt, f"Receipt missing required field: {field}"

    # Validate timestamp format
    assert isinstance(receipt["timestamp"], str), "Timestamp should be a string"

    # Validate result structure
    assert isinstance(receipt["result"], dict), "Result should be a dictionary"
# Test-type markers (unit, integration, e2e, performance, security, slow) are
# registered in pytest.ini under the `markers` option; no runtime registration
# is needed here.
625
tests/e2e/test_wallet_daemon.py
Normal file
@@ -0,0 +1,625 @@
"""
End-to-end tests for AITBC Wallet Daemon
"""

import pytest
import asyncio
import json
import time
from datetime import datetime
from pathlib import Path

import requests
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.hazmat.primitives import serialization

from packages.py.aitbc_crypto import sign_receipt, verify_receipt
from packages.py.aitbc_sdk import AITBCClient
@pytest.mark.e2e
class TestWalletDaemonE2E:
    """End-to-end tests for wallet daemon functionality"""

    @pytest.fixture
    def wallet_base_url(self):
        """Wallet daemon base URL"""
        return "http://localhost:8002"

    @pytest.fixture
    def coordinator_base_url(self):
        """Coordinator API base URL"""
        return "http://localhost:8001"

    @pytest.fixture
    def test_wallet_data(self, temp_directory):
        """Create test wallet data"""
        wallet_path = Path(temp_directory) / "test_wallet.json"
        wallet_data = {
            "id": "test-wallet-123",
            "name": "Test Wallet",
            "created_at": datetime.utcnow().isoformat(),
            "accounts": [
                {
                    "address": "0x1234567890abcdef",
                    "public_key": "test-public-key",
                    "encrypted_private_key": "encrypted-key-here",
                }
            ],
        }

        with open(wallet_path, "w") as f:
            json.dump(wallet_data, f)

        return wallet_path
    def test_wallet_creation_flow(self, wallet_base_url, temp_directory):
        """Test complete wallet creation flow"""
        # Step 1: Create new wallet
        create_data = {
            "name": "E2E Test Wallet",
            "password": "test-password-123",
            "keystore_path": str(temp_directory),
        }

        response = requests.post(f"{wallet_base_url}/v1/wallets", json=create_data)
        assert response.status_code == 201

        wallet = response.json()
        assert wallet["name"] == "E2E Test Wallet"
        assert "id" in wallet
        assert "accounts" in wallet
        assert len(wallet["accounts"]) == 1

        account = wallet["accounts"][0]
        assert "address" in account
        assert "public_key" in account
        assert "encrypted_private_key" not in account  # Should not be exposed

        # Step 2: List wallets
        response = requests.get(f"{wallet_base_url}/v1/wallets")
        assert response.status_code == 200

        wallets = response.json()
        assert any(w["id"] == wallet["id"] for w in wallets)

        # Step 3: Get wallet details
        response = requests.get(f"{wallet_base_url}/v1/wallets/{wallet['id']}")
        assert response.status_code == 200

        wallet_details = response.json()
        assert wallet_details["id"] == wallet["id"]
        assert len(wallet_details["accounts"]) == 1

    def test_wallet_unlock_flow(self, wallet_base_url, test_wallet_data):
        """Test wallet unlock and session management"""
        # Step 1: Unlock wallet
        unlock_data = {
            "password": "test-password-123",
            "keystore_path": str(test_wallet_data),
        }

        response = requests.post(f"{wallet_base_url}/v1/wallets/unlock", json=unlock_data)
        assert response.status_code == 200

        unlock_result = response.json()
        assert "session_token" in unlock_result
        assert "expires_at" in unlock_result

        session_token = unlock_result["session_token"]

        # Step 2: Use session for signing
        headers = {"Authorization": f"Bearer {session_token}"}

        sign_data = {
            "message": "Test message to sign",
            "account_address": "0x1234567890abcdef",
        }

        response = requests.post(
            f"{wallet_base_url}/v1/sign",
            json=sign_data,
            headers=headers,
        )
        assert response.status_code == 200

        signature = response.json()
        assert "signature" in signature
        assert "public_key" in signature

        # Step 3: Lock wallet
        response = requests.post(
            f"{wallet_base_url}/v1/wallets/lock",
            headers=headers,
        )
        assert response.status_code == 200

        # Step 4: Verify session is invalid
        response = requests.post(
            f"{wallet_base_url}/v1/sign",
            json=sign_data,
            headers=headers,
        )
        assert response.status_code == 401

    def test_receipt_verification_flow(self, wallet_base_url, coordinator_base_url, signed_receipt):
        """Test receipt verification workflow"""
        # Step 1: Submit receipt to wallet for verification
        verify_data = {
            "receipt": signed_receipt,
        }

        response = requests.post(
            f"{wallet_base_url}/v1/receipts/verify",
            json=verify_data,
        )
        assert response.status_code == 200

        verification = response.json()
        assert "valid" in verification
        assert verification["valid"] is True
        assert "verifications" in verification

        # Check verification details
        verifications = verification["verifications"]
        assert "miner_signature" in verifications
        assert "coordinator_signature" in verifications
        assert verifications["miner_signature"]["valid"] is True
        assert verifications["coordinator_signature"]["valid"] is True

        # Step 2: Get receipt history
        response = requests.get(f"{wallet_base_url}/v1/receipts")
        assert response.status_code == 200

        receipts = response.json()
        assert len(receipts) > 0
        assert any(r["id"] == signed_receipt["id"] for r in receipts)
    def test_cross_component_integration(self, wallet_base_url, coordinator_base_url):
        """Test integration between wallet and coordinator"""
        # Step 1: Create job via coordinator
        job_data = {
            "job_type": "ai_inference",
            "parameters": {
                "model": "gpt-3.5-turbo",
                "prompt": "Test prompt",
            },
        }

        response = requests.post(
            f"{coordinator_base_url}/v1/jobs",
            json=job_data,
            headers={"X-Tenant-ID": "test-tenant"},
        )
        assert response.status_code == 201

        job = response.json()
        job_id = job["id"]

        # Step 2: Mock job completion and receipt creation
        # In a real test, this would involve actual miner execution
        receipt_data = {
            "id": f"receipt-{job_id}",
            "job_id": job_id,
            "miner_id": "test-miner",
            "coordinator_id": "test-coordinator",
            "timestamp": datetime.utcnow().isoformat(),
            "result": {"output": "Test result"},
        }

        # Sign receipt
        private_key = ed25519.Ed25519PrivateKey.generate()
        receipt_json = json.dumps({k: v for k, v in receipt_data.items() if k != "signature"})
        signature = private_key.sign(receipt_json.encode())
        receipt_data["signature"] = signature.hex()

        # Step 3: Submit receipt to coordinator
        response = requests.post(
            f"{coordinator_base_url}/v1/receipts",
            json=receipt_data,
        )
        assert response.status_code == 201

        # Step 4: Fetch and verify receipt via wallet
        response = requests.get(
            f"{wallet_base_url}/v1/receipts/{receipt_data['id']}"
        )
        assert response.status_code == 200

        fetched_receipt = response.json()
        assert fetched_receipt["id"] == receipt_data["id"]
        assert fetched_receipt["job_id"] == job_id

    def test_error_handling_flows(self, wallet_base_url):
        """Test error handling in various scenarios"""
        # Test invalid password
        unlock_data = {
            "password": "wrong-password",
            "keystore_path": "/nonexistent/path",
        }

        response = requests.post(f"{wallet_base_url}/v1/wallets/unlock", json=unlock_data)
        assert response.status_code == 400
        assert "error" in response.json()

        # Test invalid session token
        headers = {"Authorization": "Bearer invalid-token"}

        sign_data = {
            "message": "Test",
            "account_address": "0x123",
        }

        response = requests.post(
            f"{wallet_base_url}/v1/sign",
            json=sign_data,
            headers=headers,
        )
        assert response.status_code == 401

        # Test invalid receipt format
        response = requests.post(
            f"{wallet_base_url}/v1/receipts/verify",
            json={"receipt": {"invalid": "data"}},
        )
        assert response.status_code == 400
    def test_concurrent_operations(self, wallet_base_url, test_wallet_data):
        """Test concurrent wallet operations"""
        import threading
        import queue

        # Unlock wallet first
        unlock_data = {
            "password": "test-password-123",
            "keystore_path": str(test_wallet_data),
        }

        response = requests.post(f"{wallet_base_url}/v1/wallets/unlock", json=unlock_data)
        session_token = response.json()["session_token"]
        headers = {"Authorization": f"Bearer {session_token}"}

        # Concurrent signing operations
        results = queue.Queue()

        def sign_message(message_id):
            sign_data = {
                "message": f"Test message {message_id}",
                "account_address": "0x1234567890abcdef",
            }

            response = requests.post(
                f"{wallet_base_url}/v1/sign",
                json=sign_data,
                headers=headers,
            )
            results.put((message_id, response.status_code, response.json()))

        # Start 10 concurrent signing operations
        threads = []
        for i in range(10):
            thread = threading.Thread(target=sign_message, args=(i,))
            threads.append(thread)
            thread.start()

        # Wait for all threads to complete
        for thread in threads:
            thread.join()

        # Verify all operations succeeded
        success_count = 0
        while not results.empty():
            msg_id, status, result = results.get()
            assert status == 200, f"Message {msg_id} failed"
            success_count += 1

        assert success_count == 10

    def test_performance_limits(self, wallet_base_url, test_wallet_data):
        """Test performance limits and rate limiting"""
        # Unlock wallet
        unlock_data = {
            "password": "test-password-123",
            "keystore_path": str(test_wallet_data),
        }

        response = requests.post(f"{wallet_base_url}/v1/wallets/unlock", json=unlock_data)
        session_token = response.json()["session_token"]
        headers = {"Authorization": f"Bearer {session_token}"}

        # Test rapid signing requests
        start_time = time.time()
        success_count = 0

        for i in range(100):
            sign_data = {
                "message": f"Performance test {i}",
                "account_address": "0x1234567890abcdef",
            }

            response = requests.post(
                f"{wallet_base_url}/v1/sign",
                json=sign_data,
                headers=headers,
            )

            if response.status_code == 200:
                success_count += 1
            elif response.status_code == 429:
                # Rate limited
                break

        elapsed_time = time.time() - start_time

        # Should handle at least 50 requests per second
        assert success_count > 50
        assert success_count / elapsed_time > 50
    def test_wallet_backup_and_restore(self, wallet_base_url, temp_directory):
        """Test wallet backup and restore functionality"""
        # Step 1: Create wallet with multiple accounts
        create_data = {
            "name": "Backup Test Wallet",
            "password": "backup-password-123",
            "keystore_path": str(temp_directory),
        }

        response = requests.post(f"{wallet_base_url}/v1/wallets", json=create_data)
        wallet = response.json()

        # Add additional account
        unlock_data = {
            "password": "backup-password-123",
            "keystore_path": str(temp_directory),
        }

        response = requests.post(f"{wallet_base_url}/v1/wallets/unlock", json=unlock_data)
        session_token = response.json()["session_token"]
        headers = {"Authorization": f"Bearer {session_token}"}

        response = requests.post(
            f"{wallet_base_url}/v1/accounts",
            headers=headers,
        )
        assert response.status_code == 201

        # Step 2: Create backup
        backup_path = Path(temp_directory) / "wallet_backup.json"

        response = requests.post(
            f"{wallet_base_url}/v1/wallets/{wallet['id']}/backup",
            json={"backup_path": str(backup_path)},
            headers=headers,
        )
        assert response.status_code == 200

        # Verify backup exists
        assert backup_path.exists()

        # Step 3: Restore wallet to new location
        restore_dir = Path(temp_directory) / "restored"
        restore_dir.mkdir()

        response = requests.post(
            f"{wallet_base_url}/v1/wallets/restore",
            json={
                "backup_path": str(backup_path),
                "restore_path": str(restore_dir),
                "new_password": "restored-password-456",
            },
        )
        assert response.status_code == 200

        restored_wallet = response.json()
        assert len(restored_wallet["accounts"]) == 2

        # Step 4: Verify restored wallet works
        unlock_data = {
            "password": "restored-password-456",
            "keystore_path": str(restore_dir),
        }

        response = requests.post(f"{wallet_base_url}/v1/wallets/unlock", json=unlock_data)
        assert response.status_code == 200
@pytest.mark.e2e
class TestWalletSecurityE2E:
    """End-to-end security tests for wallet daemon"""

    def test_session_security(self, wallet_base_url, test_wallet_data):
        """Test session token security"""
        # Unlock wallet to get session
        unlock_data = {
            "password": "test-password-123",
            "keystore_path": str(test_wallet_data),
        }

        response = requests.post(f"{wallet_base_url}/v1/wallets/unlock", json=unlock_data)
        session_token = response.json()["session_token"]

        # Test session expiration
        # In a real test, this would wait for actual expiration;
        # for now, test invalid token formats
        invalid_tokens = [
            "",
            "invalid",
            "Bearer invalid",
            "Bearer ",
            "Bearer " + "A" * 1000,  # Too long
        ]

        for token in invalid_tokens:
            headers = {"Authorization": token}
            response = requests.get(f"{wallet_base_url}/v1/wallets", headers=headers)
            assert response.status_code == 401

    def test_input_validation(self, wallet_base_url):
        """Test input validation and sanitization"""
        # Test malicious inputs
        malicious_inputs = [
            {"name": "<script>alert('xss')</script>"},
            {"password": "../../etc/passwd"},
            {"keystore_path": "/etc/shadow"},
            {"message": "\x00\x01\x02\x03"},
            {"account_address": "invalid-address"},
        ]

        for malicious_input in malicious_inputs:
            response = requests.post(
                f"{wallet_base_url}/v1/wallets",
                json=malicious_input,
            )
            # Should either reject or sanitize
            assert response.status_code in [400, 422]

    def test_rate_limiting(self, wallet_base_url):
        """Test rate limiting on sensitive operations"""
        # Test unlock rate limiting
        unlock_data = {
            "password": "test",
            "keystore_path": "/nonexistent",
        }

        # Send rapid requests
        rate_limited = False
        for i in range(100):
            response = requests.post(f"{wallet_base_url}/v1/wallets/unlock", json=unlock_data)
            if response.status_code == 429:
                rate_limited = True
                break

        assert rate_limited, "Rate limiting should be triggered"

    def test_encryption_strength(self, wallet_base_url, temp_directory):
        """Test wallet encryption strength"""
        # Create wallet with strong password
        create_data = {
            "name": "Security Test Wallet",
            "password": "VeryStr0ngP@ssw0rd!2024#SpecialChars",
            "keystore_path": str(temp_directory),
        }

        response = requests.post(f"{wallet_base_url}/v1/wallets", json=create_data)
        assert response.status_code == 201

        # Verify keystore file is encrypted
        keystore_path = Path(temp_directory) / "security-test-wallet.json"
        assert keystore_path.exists()

        with open(keystore_path, "r") as f:
            keystore_data = json.load(f)

        # Check that private keys are encrypted
        for account in keystore_data.get("accounts", []):
            assert "encrypted_private_key" in account
            encrypted_key = account["encrypted_private_key"]
            # Should not contain plaintext key material
            assert "BEGIN PRIVATE KEY" not in encrypted_key
            assert "-----END" not in encrypted_key
@pytest.mark.e2e
@pytest.mark.slow
class TestWalletPerformanceE2E:
    """Performance tests for wallet daemon"""

    def test_large_wallet_performance(self, wallet_base_url, temp_directory):
        """Test performance with large number of accounts"""
        # Create wallet
        create_data = {
            "name": "Large Wallet Test",
            "password": "test-password-123",
            "keystore_path": str(temp_directory),
        }

        response = requests.post(f"{wallet_base_url}/v1/wallets", json=create_data)
        wallet = response.json()

        # Unlock wallet
        unlock_data = {
            "password": "test-password-123",
            "keystore_path": str(temp_directory),
        }

        response = requests.post(f"{wallet_base_url}/v1/wallets/unlock", json=unlock_data)
        session_token = response.json()["session_token"]
        headers = {"Authorization": f"Bearer {session_token}"}

        # Create 100 accounts
        start_time = time.time()

        for i in range(100):
            response = requests.post(
                f"{wallet_base_url}/v1/accounts",
                headers=headers,
            )
            assert response.status_code == 201

        creation_time = time.time() - start_time

        # Should create accounts quickly
        assert creation_time < 10.0, f"Account creation too slow: {creation_time}s"

        # Test listing performance
        start_time = time.time()

        response = requests.get(
            f"{wallet_base_url}/v1/wallets/{wallet['id']}",
            headers=headers,
        )

        listing_time = time.time() - start_time
        assert response.status_code == 200

        wallet_data = response.json()
        assert len(wallet_data["accounts"]) == 101
        assert listing_time < 1.0, f"Wallet listing too slow: {listing_time}s"

    def test_concurrent_wallet_operations(self, wallet_base_url, temp_directory):
        """Test concurrent operations on multiple wallets"""
        import concurrent.futures

        def create_and_use_wallet(wallet_id):
            wallet_dir = Path(temp_directory) / f"wallet_{wallet_id}"
            wallet_dir.mkdir()

            # Create wallet
            create_data = {
                "name": f"Concurrent Wallet {wallet_id}",
                "password": f"password-{wallet_id}",
                "keystore_path": str(wallet_dir),
            }

            response = requests.post(f"{wallet_base_url}/v1/wallets", json=create_data)
            assert response.status_code == 201

            # Unlock and sign
            unlock_data = {
                "password": f"password-{wallet_id}",
                "keystore_path": str(wallet_dir),
            }

            response = requests.post(f"{wallet_base_url}/v1/wallets/unlock", json=unlock_data)
            session_token = response.json()["session_token"]
            headers = {"Authorization": f"Bearer {session_token}"}

            sign_data = {
                "message": f"Message from wallet {wallet_id}",
                "account_address": "0x1234567890abcdef",
            }

            response = requests.post(
                f"{wallet_base_url}/v1/sign",
                json=sign_data,
                headers=headers,
            )

            return response.status_code == 200

        # Run 20 concurrent wallet operations
        with concurrent.futures.ThreadPoolExecutor(max_workers=20) as executor:
            futures = [executor.submit(create_and_use_wallet, i) for i in range(20)]
            results = [future.result() for future in concurrent.futures.as_completed(futures)]

        # All operations should succeed
        assert all(results), "Some concurrent wallet operations failed"
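The cross-component test above signs a receipt by serializing every field except `signature` and signing the resulting bytes; a verifier must reproduce the exact same byte string, so deterministic serialization (sorted keys, stable separators) matters. A minimal stdlib sketch of that round trip, using HMAC-SHA256 as a stand-in for the Ed25519 keys the real tests use (the helper names here are illustrative, not the `aitbc_crypto` API):

```python
import hashlib
import hmac
import json


def canonical_receipt_bytes(receipt: dict) -> bytes:
    # Serialize every field except the signature, with sorted keys and
    # compact separators so signer and verifier produce identical bytes
    return json.dumps(
        {k: v for k, v in receipt.items() if k != "signature"},
        sort_keys=True,
        separators=(",", ":"),
    ).encode()


def sign_receipt_hmac(key: bytes, receipt: dict) -> str:
    return hmac.new(key, canonical_receipt_bytes(receipt), hashlib.sha256).hexdigest()


def verify_receipt_hmac(key: bytes, receipt: dict) -> bool:
    expected = sign_receipt_hmac(key, receipt)
    return hmac.compare_digest(expected, receipt.get("signature", ""))
```

Any mutation of a signed field (for example, tampering with `result`) changes the canonical bytes and causes verification to fail.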
533	tests/integration/test_blockchain_node.py	Normal file
@ -0,0 +1,533 @@
"""
|
||||
Integration tests for AITBC Blockchain Node
|
||||
"""
|
||||
|
||||
import pytest
|
||||
import asyncio
|
||||
import json
|
||||
import websockets
|
||||
from datetime import datetime, timedelta
|
||||
from unittest.mock import Mock, patch, AsyncMock
|
||||
import requests
|
||||
|
||||
from apps.blockchain_node.src.aitbc_chain.models import Block, Transaction, Receipt, Account
|
||||
from apps.blockchain_node.src.aitbc_chain.consensus.poa import PoAConsensus
|
||||
from apps.blockchain_node.src.aitbc_chain.rpc.router import router
|
||||
from apps.blockchain_node.src.aitbc_chain.rpc.websocket import WebSocketManager
|
||||
|
||||
|
||||
@pytest.mark.integration
|
||||
class TestBlockchainNodeRPC:
|
||||
"""Test blockchain node RPC endpoints"""
|
||||
|
||||
@pytest.fixture
|
||||
def blockchain_client(self):
|
||||
"""Create a test client for blockchain node"""
|
||||
base_url = "http://localhost:8545"
|
||||
return requests.Session()
|
||||
# Note: In real tests, this would connect to a running test instance
|
||||
|
||||
    def test_get_block_by_number(self, blockchain_client):
        """Test getting block by number"""
        with patch('apps.blockchain_node.src.aitbc_chain.rpc.handlers.get_block_by_number') as mock_handler:
            mock_handler.return_value = {
                "number": 100,
                "hash": "0x123",
                "timestamp": datetime.utcnow().timestamp(),
                "transactions": [],
            }

            response = blockchain_client.post(
                "http://localhost:8545",
                json={
                    "jsonrpc": "2.0",
                    "method": "eth_getBlockByNumber",
                    "params": ["0x64", True],
                    "id": 1,
                },
            )

            assert response.status_code == 200
            data = response.json()
            assert data["jsonrpc"] == "2.0"
            assert "result" in data
            assert data["result"]["number"] == 100

    def test_get_transaction_by_hash(self, blockchain_client):
        """Test getting transaction by hash"""
        with patch('apps.blockchain_node.src.aitbc_chain.rpc.handlers.get_transaction_by_hash') as mock_handler:
            mock_handler.return_value = {
                "hash": "0x456",
                "blockNumber": 100,
                "from": "0xabc",
                "to": "0xdef",
                "value": "1000",
                "status": "0x1",
            }

            response = blockchain_client.post(
                "http://localhost:8545",
                json={
                    "jsonrpc": "2.0",
                    "method": "eth_getTransactionByHash",
                    "params": ["0x456"],
                    "id": 1,
                },
            )

            assert response.status_code == 200
            data = response.json()
            assert data["result"]["hash"] == "0x456"

    def test_send_raw_transaction(self, blockchain_client):
        """Test sending raw transaction"""
        with patch('apps.blockchain_node.src.aitbc_chain.rpc.handlers.send_raw_transaction') as mock_handler:
            mock_handler.return_value = "0x789"

            response = blockchain_client.post(
                "http://localhost:8545",
                json={
                    "jsonrpc": "2.0",
                    "method": "eth_sendRawTransaction",
                    "params": ["0xrawtx"],
                    "id": 1,
                },
            )

            assert response.status_code == 200
            data = response.json()
            assert data["result"] == "0x789"

    def test_get_balance(self, blockchain_client):
        """Test getting account balance"""
        with patch('apps.blockchain_node.src.aitbc_chain.rpc.handlers.get_balance') as mock_handler:
            mock_handler.return_value = "0x1520F41CC0B40000"  # ~1.52 ETH in wei

            response = blockchain_client.post(
                "http://localhost:8545",
                json={
                    "jsonrpc": "2.0",
                    "method": "eth_getBalance",
                    "params": ["0xabc", "latest"],
                    "id": 1,
                },
            )

            assert response.status_code == 200
            data = response.json()
            assert data["result"] == "0x1520F41CC0B40000"

    def test_get_block_range(self, blockchain_client):
        """Test getting a range of blocks"""
        with patch('apps.blockchain_node.src.aitbc_chain.rpc.handlers.get_block_range') as mock_handler:
            mock_handler.return_value = [
                {"number": 100, "hash": "0x100"},
                {"number": 101, "hash": "0x101"},
                {"number": 102, "hash": "0x102"},
            ]

            response = blockchain_client.post(
                "http://localhost:8545",
                json={
                    "jsonrpc": "2.0",
                    "method": "aitbc_getBlockRange",
                    "params": [100, 102],
                    "id": 1,
                },
            )

            assert response.status_code == 200
            data = response.json()
            assert len(data["result"]) == 3
            assert data["result"][0]["number"] == 100
@pytest.mark.integration
class TestWebSocketSubscriptions:
    """Test WebSocket subscription functionality"""

    @pytest.mark.asyncio
    async def test_subscribe_new_blocks(self):
        """Test subscribing to new blocks"""
        with patch('websockets.connect') as mock_connect:
            mock_ws = AsyncMock()
            mock_connect.return_value.__aenter__.return_value = mock_ws

            # Mock subscription response
            mock_ws.recv.side_effect = [
                json.dumps({"id": 1, "result": "0xsubscription"}),
                json.dumps({
                    "subscription": "0xsubscription",
                    "result": {
                        "number": 101,
                        "hash": "0xnewblock",
                    },
                }),
            ]

            # Connect and subscribe
            async with websockets.connect("ws://localhost:8546") as ws:
                await ws.send(json.dumps({
                    "id": 1,
                    "method": "eth_subscribe",
                    "params": ["newHeads"],
                }))

                # Get subscription ID
                response = await ws.recv()
                sub_data = json.loads(response)
                assert "result" in sub_data

                # Get block notification
                notification = await ws.recv()
                block_data = json.loads(notification)
                assert block_data["result"]["number"] == 101

    @pytest.mark.asyncio
    async def test_subscribe_pending_transactions(self):
        """Test subscribing to pending transactions"""
        with patch('websockets.connect') as mock_connect:
            mock_ws = AsyncMock()
            mock_connect.return_value.__aenter__.return_value = mock_ws

            mock_ws.recv.side_effect = [
                json.dumps({"id": 1, "result": "0xtxsub"}),
                json.dumps({
                    "subscription": "0xtxsub",
                    "result": {
                        "hash": "0xtx123",
                        "from": "0xabc",
                        "to": "0xdef",
                    },
                }),
            ]

            async with websockets.connect("ws://localhost:8546") as ws:
                await ws.send(json.dumps({
                    "id": 1,
                    "method": "eth_subscribe",
                    "params": ["newPendingTransactions"],
                }))

                response = await ws.recv()
                sub_data = json.loads(response)
                assert "result" in sub_data

                notification = await ws.recv()
                tx_data = json.loads(notification)
                assert tx_data["result"]["hash"] == "0xtx123"

    @pytest.mark.asyncio
    async def test_subscribe_logs(self):
        """Test subscribing to event logs"""
        with patch('websockets.connect') as mock_connect:
            mock_ws = AsyncMock()
            mock_connect.return_value.__aenter__.return_value = mock_ws

            mock_ws.recv.side_effect = [
                json.dumps({"id": 1, "result": "0xlogsub"}),
                json.dumps({
                    "subscription": "0xlogsub",
                    "result": {
                        "address": "0xcontract",
                        "topics": ["0xevent"],
                        "data": "0xdata",
                    },
                }),
            ]

            async with websockets.connect("ws://localhost:8546") as ws:
                await ws.send(json.dumps({
                    "id": 1,
                    "method": "eth_subscribe",
                    "params": ["logs", {"address": "0xcontract"}],
                }))

                response = await ws.recv()
                sub_data = json.loads(response)
                assert "result" in sub_data

                notification = await ws.recv()
                log_data = json.loads(notification)
                assert log_data["result"]["address"] == "0xcontract"
@pytest.mark.integration
class TestPoAConsensus:
    """Test Proof of Authority consensus mechanism"""

    @pytest.fixture
    def poa_consensus(self):
        """Create PoA consensus instance for testing"""
        validators = [
            "0xvalidator1",
            "0xvalidator2",
            "0xvalidator3",
        ]
        return PoAConsensus(validators=validators, block_time=1)

    def test_proposer_selection(self, poa_consensus):
        """Test proposer selection algorithm"""
        # Test deterministic proposer selection
        proposer1 = poa_consensus.get_proposer(100)
        proposer2 = poa_consensus.get_proposer(101)

        assert proposer1 in poa_consensus.validators
        assert proposer2 in poa_consensus.validators
        # Should rotate based on block number
        assert proposer1 != proposer2

    def test_block_validation(self, poa_consensus):
        """Test block validation"""
        block = Block(
            number=100,
            hash="0xblock123",
            proposer="0xvalidator1",
            timestamp=datetime.utcnow(),
            transactions=[],
        )

        # Valid block
        assert poa_consensus.validate_block(block) is True

        # Invalid proposer
        block.proposer = "0xinvalid"
        assert poa_consensus.validate_block(block) is False

    def test_validator_rotation(self, poa_consensus):
        """Test validator rotation schedule"""
        proposers = []
        for i in range(10):
            proposer = poa_consensus.get_proposer(i)
            proposers.append(proposer)

        # Each validator should have proposed roughly equal times
        for validator in poa_consensus.validators:
            count = proposers.count(validator)
            assert count >= 2  # At least 2 times in 10 blocks

    @pytest.mark.asyncio
    async def test_block_production_loop(self, poa_consensus):
        """Test block production loop"""
        blocks_produced = []

        async def mock_produce_block():
            block = Block(
                number=len(blocks_produced),
                hash=f"0xblock{len(blocks_produced)}",
                proposer=poa_consensus.get_proposer(len(blocks_produced)),
                timestamp=datetime.utcnow(),
                transactions=[],
            )
            blocks_produced.append(block)
            return block

        # Mock block production
        with patch.object(poa_consensus, 'produce_block', side_effect=mock_produce_block):
            # Produce 3 blocks
            for _ in range(3):
                block = await poa_consensus.produce_block()
                assert block.number == len(blocks_produced) - 1

            assert len(blocks_produced) == 3
@pytest.mark.integration
class TestCrossChainSettlement:
    """Test cross-chain settlement integration"""

    @pytest.fixture
    def bridge_manager(self):
        """Create bridge manager for testing"""
        from apps.coordinator_api.src.app.services.bridge_manager import BridgeManager
        return BridgeManager()

    def test_bridge_registration(self, bridge_manager):
        """Test bridge registration"""
        bridge_config = {
            "bridge_id": "layerzero",
            "source_chain": "ethereum",
            "target_chain": "polygon",
            "endpoint": "https://endpoint.layerzero.network",
        }

        result = bridge_manager.register_bridge(bridge_config)
        assert result["success"] is True
        assert result["bridge_id"] == "layerzero"

    def test_cross_chain_transaction(self, bridge_manager):
        """Test cross-chain transaction execution"""
        with patch.object(bridge_manager, 'execute_cross_chain_tx') as mock_execute:
            mock_execute.return_value = {
                "tx_hash": "0xcrosschain",
                "status": "pending",
                "source_tx": "0x123",
                "target_tx": None,
            }

            result = bridge_manager.execute_cross_chain_tx({
                "source_chain": "ethereum",
                "target_chain": "polygon",
                "amount": "1000",
                "token": "USDC",
                "recipient": "0xabc",
            })

            assert result["tx_hash"] is not None
            assert result["status"] == "pending"

    def test_settlement_verification(self, bridge_manager):
        """Test cross-chain settlement verification"""
        with patch.object(bridge_manager, 'verify_settlement') as mock_verify:
            mock_verify.return_value = {
                "verified": True,
                "source_tx": "0x123",
                "target_tx": "0x456",
                "amount": "1000",
                "completed_at": datetime.utcnow().isoformat(),
            }

            result = bridge_manager.verify_settlement("0xcrosschain")

            assert result["verified"] is True
            assert result["target_tx"] is not None
@pytest.mark.integration
|
||||
class TestNodePeering:
|
||||
"""Test node peering and gossip"""
|
||||
|
||||
@pytest.fixture
|
||||
def peer_manager(self):
|
||||
"""Create peer manager for testing"""
|
||||
from apps.blockchain_node.src.aitbc_chain.p2p.peer_manager import PeerManager
|
||||
return PeerManager()
|
||||
|
||||
def test_peer_discovery(self, peer_manager):
|
||||
"""Test peer discovery"""
|
||||
with patch.object(peer_manager, 'discover_peers') as mock_discover:
|
||||
mock_discover.return_value = [
|
||||
"enode://1@localhost:30301",
|
||||
"enode://2@localhost:30302",
|
||||
"enode://3@localhost:30303",
|
||||
]
|
||||
|
||||
peers = peer_manager.discover_peers()
|
||||
|
||||
assert len(peers) == 3
|
||||
assert all(peer.startswith("enode://") for peer in peers)
|
||||
|
||||
def test_gossip_transaction(self, peer_manager):
|
||||
"""Test transaction gossip"""
|
||||
tx_data = {
|
||||
"hash": "0xgossip",
|
||||
"from": "0xabc",
|
||||
"to": "0xdef",
|
||||
"value": "100",
|
||||
}
|
||||
|
||||
with patch.object(peer_manager, 'gossip_transaction') as mock_gossip:
|
||||
mock_gossip.return_value = {"peers_notified": 5}
|
||||
|
||||
result = peer_manager.gossip_transaction(tx_data)
|
||||
|
||||
assert result["peers_notified"] > 0
|
||||
|
||||
def test_gossip_block(self, peer_manager):
|
||||
"""Test block gossip"""
|
||||
block_data = {
|
||||
"number": 100,
|
||||
"hash": "0xblock100",
|
||||
"transactions": [],
|
||||
}
|
||||
|
||||
with patch.object(peer_manager, 'gossip_block') as mock_gossip:
|
||||
mock_gossip.return_value = {"peers_notified": 5}
|
||||
|
||||
result = peer_manager.gossip_block(block_data)
|
||||
|
||||
assert result["peers_notified"] > 0
|
||||
|
||||
|
||||
@pytest.mark.integration
|
||||
class TestNodeSynchronization:
|
||||
"""Test node synchronization"""
|
||||
|
||||
@pytest.fixture
|
||||
def sync_manager(self):
|
||||
"""Create sync manager for testing"""
|
||||
from apps.blockchain_node.src.aitbc_chain.sync.sync_manager import SyncManager
|
||||
return SyncManager()
|
||||
|
||||
def test_sync_status(self, sync_manager):
|
||||
"""Test synchronization status"""
|
||||
with patch.object(sync_manager, 'get_sync_status') as mock_status:
|
||||
mock_status.return_value = {
|
||||
"syncing": False,
|
||||
"current_block": 100,
|
||||
"highest_block": 100,
|
||||
"starting_block": 0,
|
||||
}
|
||||
|
||||
status = sync_manager.get_sync_status()
|
||||
|
||||
assert status["syncing"] is False
|
||||
assert status["current_block"] == status["highest_block"]
|
||||
|
||||
def test_sync_from_peer(self, sync_manager):
|
||||
"""Test syncing from peer"""
|
||||
with patch.object(sync_manager, 'sync_from_peer') as mock_sync:
|
||||
mock_sync.return_value = {
|
||||
"synced": True,
|
||||
"blocks_synced": 10,
|
||||
"time_taken": 5.0,
|
||||
}
|
||||
|
||||
result = sync_manager.sync_from_peer("enode://peer@localhost:30301")
|
||||
|
||||
assert result["synced"] is True
|
||||
assert result["blocks_synced"] > 0
|
||||
|
||||
|
||||
@pytest.mark.integration
|
||||
class TestNodeMetrics:
|
||||
"""Test node metrics and monitoring"""
|
||||
|
||||
def test_block_metrics(self):
|
||||
"""Test block production metrics"""
|
||||
from apps.blockchain_node.src.aitbc_chain.metrics import block_metrics
|
||||
|
||||
# Record block metrics
|
||||
block_metrics.record_block(100, 2.5)
|
||||
block_metrics.record_block(101, 2.1)
|
||||
|
||||
# Get metrics
|
||||
metrics = block_metrics.get_metrics()
|
||||
|
||||
assert metrics["block_count"] == 2
|
||||
assert metrics["avg_block_time"] == 2.3
|
||||
assert metrics["last_block_number"] == 101
|
||||
|
||||
def test_transaction_metrics(self):
|
||||
"""Test transaction metrics"""
|
||||
from apps.blockchain_node.src.aitbc_chain.metrics import tx_metrics
|
||||
|
||||
# Record transaction metrics
|
||||
tx_metrics.record_transaction("0x123", 1000, True)
|
||||
tx_metrics.record_transaction("0x456", 2000, False)
|
||||
|
||||
metrics = tx_metrics.get_metrics()
|
||||
|
||||
assert metrics["total_txs"] == 2
|
||||
assert metrics["success_rate"] == 0.5
|
||||
assert metrics["total_value"] == 3000
|
||||
|
||||
def test_peer_metrics(self):
|
||||
"""Test peer connection metrics"""
|
||||
from apps.blockchain_node.src.aitbc_chain.metrics import peer_metrics
|
||||
|
||||
# Record peer metrics
|
||||
peer_metrics.record_peer_connected()
|
||||
peer_metrics.record_peer_connected()
|
||||
peer_metrics.record_peer_disconnected()
|
||||
|
||||
metrics = peer_metrics.get_metrics()
|
||||
|
||||
assert metrics["connected_peers"] == 1
|
||||
assert metrics["total_connections"] == 2
|
||||
assert metrics["disconnections"] == 1
|
||||
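The `aitbc_chain.metrics` module these tests import is not included in this diff; a minimal sketch of a recorder consistent with the assertions above (the names `record_block`, `get_metrics`, and the module-level `block_metrics` singleton are taken from the tests, everything else is assumed) might look like:

```python
class BlockMetrics:
    """Rolling block-production metrics matching the shape the tests assert on."""

    def __init__(self):
        self._blocks = []  # list of (block_number, block_time_seconds)

    def record_block(self, number, block_time):
        self._blocks.append((number, block_time))

    def get_metrics(self):
        times = [t for _, t in self._blocks]
        return {
            "block_count": len(self._blocks),
            "avg_block_time": round(sum(times) / len(times), 2) if times else 0.0,
            "last_block_number": self._blocks[-1][0] if self._blocks else None,
        }


# Module-level singleton, as the tests import it
block_metrics = BlockMetrics()
```

Rounding to two decimals keeps the `avg_block_time == 2.3` assertion exact despite float arithmetic.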
666
tests/load/locustfile.py
Normal file
@@ -0,0 +1,666 @@
"""
|
||||
Load tests for AITBC Marketplace using Locust
|
||||
"""
|
||||
|
||||
from locust import HttpUser, task, between, events
|
||||
from locust.env import Environment
|
||||
from locust.stats import stats_printer, stats_history
|
||||
import json
|
||||
import random
|
||||
import time
|
||||
from datetime import datetime, timedelta
|
||||
import gevent
|
||||
from gevent.pool import Pool
|
||||
|
||||
|
||||
class MarketplaceUser(HttpUser):
|
||||
"""Simulated marketplace user behavior"""
|
||||
|
||||
wait_time = between(1, 3)
|
||||
weight = 10
|
||||
|
||||
def on_start(self):
|
||||
"""Called when a user starts"""
|
||||
# Initialize user session
|
||||
self.user_id = f"user_{random.randint(1000, 9999)}"
|
||||
self.tenant_id = f"tenant_{random.randint(100, 999)}"
|
||||
self.auth_headers = {
|
||||
"X-Tenant-ID": self.tenant_id,
|
||||
"Authorization": f"Bearer token_{self.user_id}",
|
||||
}
|
||||
|
||||
# Create user wallet
|
||||
self.create_wallet()
|
||||
|
||||
# Track user state
|
||||
self.offers_created = []
|
||||
self.bids_placed = []
|
||||
self.balance = 10000.0 # Starting balance in USDC
|
||||
|
||||
def create_wallet(self):
|
||||
"""Create a wallet for the user"""
|
||||
wallet_data = {
|
||||
"name": f"Wallet_{self.user_id}",
|
||||
"password": f"pass_{self.user_id}",
|
||||
}
|
||||
|
||||
response = self.client.post(
|
||||
"/v1/wallets",
|
||||
json=wallet_data,
|
||||
headers=self.auth_headers
|
||||
)
|
||||
|
||||
if response.status_code == 201:
|
||||
self.wallet_id = response.json()["id"]
|
||||
else:
|
||||
self.wallet_id = f"wallet_{self.user_id}"
|
||||
|
||||
@task(3)
|
||||
def browse_offers(self):
|
||||
"""Browse marketplace offers"""
|
||||
params = {
|
||||
"limit": 20,
|
||||
"offset": random.randint(0, 100),
|
||||
"service_type": random.choice([
|
||||
"ai_inference",
|
||||
"image_generation",
|
||||
"video_processing",
|
||||
"data_analytics",
|
||||
]),
|
||||
}
|
||||
|
||||
with self.client.get(
|
||||
"/v1/marketplace/offers",
|
||||
params=params,
|
||||
headers=self.auth_headers,
|
||||
catch_response=True,
|
||||
) as response:
|
||||
if response.status_code == 200:
|
||||
data = response.json()
|
||||
offers = data.get("items", [])
|
||||
# Simulate user viewing offers
|
||||
if offers:
|
||||
self.view_offer_details(random.choice(offers)["id"])
|
||||
response.success()
|
||||
else:
|
||||
response.failure(f"Failed to browse offers: {response.status_code}")
|
||||
|
||||
def view_offer_details(self, offer_id):
|
||||
"""View detailed offer information"""
|
||||
with self.client.get(
|
||||
f"/v1/marketplace/offers/{offer_id}",
|
||||
headers=self.auth_headers,
|
||||
catch_response=True,
|
||||
) as response:
|
||||
if response.status_code == 200:
|
||||
response.success()
|
||||
else:
|
||||
response.failure(f"Failed to view offer: {response.status_code}")
|
||||
|
||||
@task(2)
|
||||
def create_offer(self):
|
||||
"""Create a new marketplace offer"""
|
||||
if self.balance < 100:
|
||||
return # Insufficient balance
|
||||
|
||||
offer_data = {
|
||||
"service_type": random.choice([
|
||||
"ai_inference",
|
||||
"image_generation",
|
||||
"video_processing",
|
||||
"data_analytics",
|
||||
"scientific_computing",
|
||||
]),
|
||||
"pricing": {
|
||||
"per_hour": round(random.uniform(0.1, 5.0), 2),
|
||||
"per_unit": round(random.uniform(0.001, 0.1), 4),
|
||||
},
|
||||
"capacity": random.randint(10, 1000),
|
||||
"requirements": {
|
||||
"gpu_memory": random.choice(["8GB", "16GB", "32GB", "64GB"]),
|
||||
"cpu_cores": random.randint(4, 32),
|
||||
"ram": random.choice(["16GB", "32GB", "64GB", "128GB"]),
|
||||
},
|
||||
"availability": {
|
||||
"start_time": (datetime.utcnow() + timedelta(hours=1)).isoformat(),
|
||||
"end_time": (datetime.utcnow() + timedelta(days=30)).isoformat(),
|
||||
},
|
||||
}
|
||||
|
||||
with self.client.post(
|
||||
"/v1/marketplace/offers",
|
||||
json=offer_data,
|
||||
headers=self.auth_headers,
|
||||
catch_response=True,
|
||||
) as response:
|
||||
if response.status_code == 201:
|
||||
offer = response.json()
|
||||
self.offers_created.append(offer["id"])
|
||||
response.success()
|
||||
else:
|
||||
response.failure(f"Failed to create offer: {response.status_code}")
|
||||
|
||||
@task(3)
|
||||
def place_bid(self):
|
||||
"""Place a bid on an existing offer"""
|
||||
# First get available offers
|
||||
with self.client.get(
|
||||
"/v1/marketplace/offers",
|
||||
params={"limit": 10, "status": "active"},
|
||||
headers=self.auth_headers,
|
||||
) as response:
|
||||
if response.status_code != 200:
|
||||
return
|
||||
|
||||
offers = response.json().get("items", [])
|
||||
if not offers:
|
||||
return
|
||||
|
||||
# Select random offer
|
||||
offer = random.choice(offers)
|
||||
|
||||
# Calculate bid amount
|
||||
max_price = offer["pricing"]["per_hour"]
|
||||
bid_price = round(max_price * random.uniform(0.8, 0.95), 2)
|
||||
|
||||
if self.balance < bid_price:
|
||||
return
|
||||
|
||||
bid_data = {
|
||||
"offer_id": offer["id"],
|
||||
"quantity": random.randint(1, min(10, offer["capacity"])),
|
||||
"max_price": bid_price,
|
||||
"duration_hours": random.randint(1, 24),
|
||||
}
|
||||
|
||||
with self.client.post(
|
||||
"/v1/marketplace/bids",
|
||||
json=bid_data,
|
||||
headers=self.auth_headers,
|
||||
catch_response=True,
|
||||
) as response:
|
||||
if response.status_code == 201:
|
||||
bid = response.json()
|
||||
self.bids_placed.append(bid["id"])
|
||||
self.balance -= bid_price * bid_data["quantity"]
|
||||
response.success()
|
||||
else:
|
||||
response.failure(f"Failed to place bid: {response.status_code}")
|
||||
|
||||
@task(2)
|
||||
def check_bids(self):
|
||||
"""Check status of placed bids"""
|
||||
if not self.bids_placed:
|
||||
return
|
||||
|
||||
bid_id = random.choice(self.bids_placed)
|
||||
|
||||
with self.client.get(
|
||||
f"/v1/marketplace/bids/{bid_id}",
|
||||
headers=self.auth_headers,
|
||||
catch_response=True,
|
||||
) as response:
|
||||
if response.status_code == 200:
|
||||
bid = response.json()
|
||||
|
||||
# If bid is accepted, create transaction
|
||||
if bid["status"] == "accepted":
|
||||
self.create_transaction(bid)
|
||||
|
||||
response.success()
|
||||
else:
|
||||
response.failure(f"Failed to check bid: {response.status_code}")
|
||||
|
||||
def create_transaction(self, bid):
|
||||
"""Create transaction for accepted bid"""
|
||||
tx_data = {
|
||||
"bid_id": bid["id"],
|
||||
"payment_method": "wallet",
|
||||
"confirmations": True,
|
||||
}
|
||||
|
||||
with self.client.post(
|
||||
"/v1/marketplace/transactions",
|
||||
json=tx_data,
|
||||
headers=self.auth_headers,
|
||||
catch_response=True,
|
||||
) as response:
|
||||
if response.status_code == 201:
|
||||
response.success()
|
||||
else:
|
||||
response.failure(f"Failed to create transaction: {response.status_code}")
|
||||
|
||||
@task(1)
|
||||
def get_marketplace_stats(self):
|
||||
"""Get marketplace statistics"""
|
||||
with self.client.get(
|
||||
"/v1/marketplace/stats",
|
||||
headers=self.auth_headers,
|
||||
catch_response=True,
|
||||
) as response:
|
||||
if response.status_code == 200:
|
||||
response.success()
|
||||
else:
|
||||
response.failure(f"Failed to get stats: {response.status_code}")
|
||||
|
||||
@task(1)
|
||||
def search_services(self):
|
||||
"""Search for specific services"""
|
||||
query = random.choice([
|
||||
"AI inference",
|
||||
"image generation",
|
||||
"video rendering",
|
||||
"data processing",
|
||||
"machine learning",
|
||||
])
|
||||
|
||||
params = {
|
||||
"q": query,
|
||||
"limit": 20,
|
||||
"min_price": random.uniform(0.1, 1.0),
|
||||
"max_price": random.uniform(5.0, 10.0),
|
||||
}
|
||||
|
||||
with self.client.get(
|
||||
"/v1/marketplace/search",
|
||||
params=params,
|
||||
headers=self.auth_headers,
|
||||
catch_response=True,
|
||||
) as response:
|
||||
if response.status_code == 200:
|
||||
response.success()
|
||||
else:
|
||||
response.failure(f"Failed to search: {response.status_code}")
|
||||
|
||||
|
||||
class MarketplaceProvider(HttpUser):
|
||||
"""Simulated service provider behavior"""
|
||||
|
||||
wait_time = between(5, 15)
|
||||
weight = 3
|
||||
|
||||
def on_start(self):
|
||||
"""Initialize provider"""
|
||||
self.provider_id = f"provider_{random.randint(100, 999)}"
|
||||
self.tenant_id = f"tenant_{random.randint(100, 999)}"
|
||||
self.auth_headers = {
|
||||
"X-Tenant-ID": self.tenant_id,
|
||||
"Authorization": f"Bearer provider_token_{self.provider_id}",
|
||||
}
|
||||
|
||||
# Register as provider
|
||||
self.register_provider()
|
||||
|
||||
# Provider services
|
||||
self.services = []
|
||||
|
||||
def register_provider(self):
|
||||
"""Register as a service provider"""
|
||||
provider_data = {
|
||||
"name": f"Provider_{self.provider_id}",
|
||||
"description": "AI/ML computing services provider",
|
||||
"endpoint": f"https://provider-{self.provider_id}.aitbc.io",
|
||||
"capabilities": [
|
||||
"ai_inference",
|
||||
"image_generation",
|
||||
"video_processing",
|
||||
],
|
||||
"infrastructure": {
|
||||
"gpu_count": random.randint(10, 100),
|
||||
"cpu_cores": random.randint(100, 1000),
|
||||
"memory_gb": random.randint(500, 5000),
|
||||
},
|
||||
}
|
||||
|
||||
self.client.post(
|
||||
"/v1/marketplace/providers/register",
|
||||
json=provider_data,
|
||||
headers=self.auth_headers
|
||||
)
|
||||
|
||||
@task(4)
|
||||
def update_service_status(self):
|
||||
"""Update status of provider services"""
|
||||
if not self.services:
|
||||
return
|
||||
|
||||
service = random.choice(self.services)
|
||||
|
||||
status_data = {
|
||||
"service_id": service["id"],
|
||||
"status": random.choice(["available", "busy", "maintenance"]),
|
||||
"utilization": random.uniform(0.1, 0.9),
|
||||
"queue_length": random.randint(0, 20),
|
||||
}
|
||||
|
||||
with self.client.patch(
|
||||
f"/v1/marketplace/services/{service['id']}/status",
|
||||
json=status_data,
|
||||
headers=self.auth_headers,
|
||||
catch_response=True,
|
||||
) as response:
|
||||
if response.status_code == 200:
|
||||
response.success()
|
||||
else:
|
||||
response.failure(f"Failed to update status: {response.status_code}")
|
||||
|
||||
@task(3)
|
||||
def create_bulk_offers(self):
|
||||
"""Create multiple offers at once"""
|
||||
offers = []
|
||||
|
||||
for _ in range(random.randint(5, 15)):
|
||||
offer_data = {
|
||||
"service_type": random.choice([
|
||||
"ai_inference",
|
||||
"image_generation",
|
||||
"video_processing",
|
||||
]),
|
||||
"pricing": {
|
||||
"per_hour": round(random.uniform(0.5, 3.0), 2),
|
||||
},
|
||||
"capacity": random.randint(50, 500),
|
||||
"requirements": {
|
||||
"gpu_memory": "16GB",
|
||||
"cpu_cores": 16,
|
||||
},
|
||||
}
|
||||
offers.append(offer_data)
|
||||
|
||||
bulk_data = {"offers": offers}
|
||||
|
||||
with self.client.post(
|
||||
"/v1/marketplace/offers/bulk",
|
||||
json=bulk_data,
|
||||
headers=self.auth_headers,
|
||||
catch_response=True,
|
||||
) as response:
|
||||
if response.status_code == 201:
|
||||
created = response.json().get("created", [])
|
||||
self.services.extend(created)
|
||||
response.success()
|
||||
else:
|
||||
response.failure(f"Failed to create bulk offers: {response.status_code}")
|
||||
|
||||
@task(2)
|
||||
def respond_to_bids(self):
|
||||
"""Respond to incoming bids"""
|
||||
with self.client.get(
|
||||
"/v1/marketplace/bids",
|
||||
params={"provider_id": self.provider_id, "status": "pending"},
|
||||
headers=self.auth_headers,
|
||||
) as response:
|
||||
if response.status_code != 200:
|
||||
return
|
||||
|
||||
bids = response.json().get("items", [])
|
||||
if not bids:
|
||||
return
|
||||
|
||||
# Respond to random bid
|
||||
bid = random.choice(bids)
|
||||
action = random.choice(["accept", "reject", "counter"])
|
||||
|
||||
response_data = {
|
||||
"bid_id": bid["id"],
|
||||
"action": action,
|
||||
}
|
||||
|
||||
if action == "counter":
|
||||
response_data["counter_price"] = round(
|
||||
bid["max_price"] * random.uniform(1.05, 1.15), 2
|
||||
)
|
||||
|
||||
with self.client.post(
|
||||
"/v1/marketplace/bids/respond",
|
||||
json=response_data,
|
||||
headers=self.auth_headers,
|
||||
catch_response=True,
|
||||
) as response:
|
||||
if response.status_code == 200:
|
||||
response.success()
|
||||
else:
|
||||
response.failure(f"Failed to respond to bid: {response.status_code}")
|
||||
|
||||
@task(1)
|
||||
def get_provider_analytics(self):
|
||||
"""Get provider analytics"""
|
||||
with self.client.get(
|
||||
f"/v1/marketplace/providers/{self.provider_id}/analytics",
|
||||
headers=self.auth_headers,
|
||||
catch_response=True,
|
||||
) as response:
|
||||
if response.status_code == 200:
|
||||
response.success()
|
||||
else:
|
||||
response.failure(f"Failed to get analytics: {response.status_code}")
|
||||
|
||||
|
||||
class MarketplaceAdmin(HttpUser):
|
||||
"""Simulated admin user behavior"""
|
||||
|
||||
wait_time = between(10, 30)
|
||||
weight = 1
|
||||
|
||||
def on_start(self):
|
||||
"""Initialize admin"""
|
||||
self.auth_headers = {
|
||||
"Authorization": "Bearer admin_token_123",
|
||||
"X-Admin-Access": "true",
|
||||
}
|
||||
|
||||
@task(3)
|
||||
def monitor_marketplace_health(self):
|
||||
"""Monitor marketplace health metrics"""
|
||||
endpoints = [
|
||||
"/v1/marketplace/health",
|
||||
"/v1/marketplace/metrics",
|
||||
"/v1/marketplace/stats",
|
||||
]
|
||||
|
||||
endpoint = random.choice(endpoints)
|
||||
|
||||
with self.client.get(
|
||||
endpoint,
|
||||
headers=self.auth_headers,
|
||||
catch_response=True,
|
||||
) as response:
|
||||
if response.status_code == 200:
|
||||
response.success()
|
||||
else:
|
||||
response.failure(f"Health check failed: {response.status_code}")
|
||||
|
||||
@task(2)
|
||||
def review_suspicious_activity(self):
|
||||
"""Review suspicious marketplace activity"""
|
||||
with self.client.get(
|
||||
"/v1/admin/marketplace/activity",
|
||||
params={
|
||||
"suspicious_only": True,
|
||||
"limit": 50,
|
||||
},
|
||||
headers=self.auth_headers,
|
||||
catch_response=True,
|
||||
) as response:
|
||||
if response.status_code == 200:
|
||||
activities = response.json().get("items", [])
|
||||
|
||||
# Take action on suspicious activities
|
||||
for activity in activities[:5]: # Limit to 5 actions
|
||||
self.take_action(activity["id"])
|
||||
|
||||
response.success()
|
||||
else:
|
||||
response.failure(f"Failed to review activity: {response.status_code}")
|
||||
|
||||
def take_action(self, activity_id):
|
||||
"""Take action on suspicious activity"""
|
||||
action = random.choice(["warn", "suspend", "investigate"])
|
||||
|
||||
with self.client.post(
|
||||
f"/v1/admin/marketplace/activity/{activity_id}/action",
|
||||
json={"action": action},
|
||||
headers=self.auth_headers,
|
||||
catch_response=True,
|
||||
) as response:
|
||||
if response.status_code in [200, 404]:
|
||||
response.success()
|
||||
else:
|
||||
response.failure(f"Failed to take action: {response.status_code}")
|
||||
|
||||
@task(1)
|
||||
def generate_reports(self):
|
||||
"""Generate marketplace reports"""
|
||||
report_types = [
|
||||
"daily_summary",
|
||||
"weekly_analytics",
|
||||
"provider_performance",
|
||||
"user_activity",
|
||||
]
|
||||
|
||||
report_type = random.choice(report_types)
|
||||
|
||||
with self.client.post(
|
||||
"/v1/admin/marketplace/reports",
|
||||
json={
|
||||
"type": report_type,
|
||||
"format": "json",
|
||||
"email": f"admin@aitbc.io",
|
||||
},
|
||||
headers=self.auth_headers,
|
||||
catch_response=True,
|
||||
) as response:
|
||||
if response.status_code == 202:
|
||||
response.success()
|
||||
else:
|
||||
response.failure(f"Failed to generate report: {response.status_code}")
|
||||
|
||||
|
||||
# Custom event handlers for monitoring
|
||||
@events.request.add_listener
|
||||
def on_request(request_type, name, response_time, response_length, exception, **kwargs):
|
||||
"""Custom request handler for additional metrics"""
|
||||
if exception:
|
||||
print(f"Request failed: {name} - {exception}")
|
||||
elif response_time > 5000: # Log slow requests
|
||||
print(f"Slow request: {name} - {response_time}ms")
|
||||
|
||||
|
||||
@events.test_start.add_listener
|
||||
def on_test_start(environment, **kwargs):
|
||||
"""Called when test starts"""
|
||||
print("Starting marketplace load test")
|
||||
print(f"Target: {environment.host}")
|
||||
|
||||
|
||||
@events.test_stop.add_listener
|
||||
def on_test_stop(environment, **kwargs):
|
||||
"""Called when test stops"""
|
||||
print("\nLoad test completed")
|
||||
|
||||
# Print summary statistics
|
||||
stats = environment.stats
|
||||
|
||||
print(f"\nTotal requests: {stats.total.num_requests}")
|
||||
print(f"Failures: {stats.total.num_failures}")
|
||||
print(f"Average response time: {stats.total.avg_response_time:.2f}ms")
|
||||
print(f"95th percentile: {stats.total.get_response_time_percentile(0.95):.2f}ms")
|
||||
print(f"Requests per second: {stats.total.current_rps:.2f}")
|
||||
|
||||
|
||||
# Custom load shapes
|
||||
class GradualLoadShape:
|
||||
"""Gradually increase load over time"""
|
||||
|
||||
def __init__(self, max_users=100, spawn_rate=10):
|
||||
self.max_users = max_users
|
||||
self.spawn_rate = spawn_rate
|
||||
|
||||
def tick(self):
|
||||
run_time = time.time() - self.start_time
|
||||
|
||||
if run_time < 60: # First minute: ramp up
|
||||
return int(self.spawn_rate * run_time / 60)
|
||||
elif run_time < 300: # Next 4 minutes: maintain
|
||||
return self.max_users
|
||||
else: # Last minute: ramp down
|
||||
remaining = 360 - run_time
|
||||
return int(self.max_users * remaining / 60)
|
||||
|
||||
|
||||
class BurstLoadShape:
|
||||
"""Burst traffic pattern"""
|
||||
|
||||
def __init__(self, burst_size=50, normal_size=10):
|
||||
self.burst_size = burst_size
|
||||
self.normal_size = normal_size
|
||||
|
||||
def tick(self):
|
||||
run_time = time.time() - self.start_time
|
||||
|
||||
# Burst every 30 seconds for 10 seconds
|
||||
if int(run_time) % 30 < 10:
|
||||
return self.burst_size
|
||||
else:
|
||||
return self.normal_size
|
||||
|
||||
|
||||
# Performance monitoring
|
||||
class PerformanceMonitor:
|
||||
"""Monitor performance during load test"""
|
||||
|
||||
def __init__(self):
|
||||
self.metrics = {
|
||||
"response_times": [],
|
||||
"error_rates": [],
|
||||
"throughput": [],
|
||||
}
|
||||
|
||||
def record_request(self, response_time, success):
|
||||
"""Record request metrics"""
|
||||
self.metrics["response_times"].append(response_time)
|
||||
self.metrics["error_rates"].append(0 if success else 1)
|
||||
|
||||
def get_summary(self):
|
||||
"""Get performance summary"""
|
||||
if not self.metrics["response_times"]:
|
||||
return {}
|
||||
|
||||
return {
|
||||
"avg_response_time": sum(self.metrics["response_times"]) / len(self.metrics["response_times"]),
|
||||
"max_response_time": max(self.metrics["response_times"]),
|
||||
"error_rate": sum(self.metrics["error_rates"]) / len(self.metrics["error_rates"]),
|
||||
"total_requests": len(self.metrics["response_times"]),
|
||||
}
|
||||
|
||||
|
||||
# Test configuration
|
||||
if __name__ == "__main__":
|
||||
# Setup environment
|
||||
env = Environment(user_classes=[MarketplaceUser, MarketplaceProvider, MarketplaceAdmin])
|
||||
|
||||
# Create performance monitor
|
||||
monitor = PerformanceMonitor()
|
||||
|
||||
# Setup host
|
||||
env.host = "http://localhost:8001"
|
||||
|
||||
# Setup load shape
|
||||
env.create_local_runner()
|
||||
|
||||
# Start web UI for monitoring
|
||||
env.create_web_ui("127.0.0.1", 8089)
|
||||
|
||||
# Start the load test
|
||||
print("Starting marketplace load test...")
|
||||
print("Web UI available at: http://127.0.0.1:8089")
|
||||
|
||||
# Run for 6 minutes
|
||||
env.runner.start(100, spawn_rate=10)
|
||||
gevent.spawn_later(360, env.runner.stop)
|
||||
|
||||
# Print stats
|
||||
gevent.spawn(stats_printer(env.stats))
|
||||
|
||||
# Wait for test to complete
|
||||
env.runner.greenlet.join()
|
||||
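The ramp profile implemented by `GradualLoadShape` can be expressed as a pure function of run time, independent of Locust; this sketch of the intended curve (60s linear ramp-up, 240s plateau, 60s ramp-down over a 6-minute run) makes the shape easy to check:

```python
def gradual_user_count(run_time, max_users=100):
    """Target user count at a given run time (seconds) for a 6-minute test."""
    if run_time < 60:           # ramp up linearly over the first minute
        return int(max_users * run_time / 60)
    if run_time < 300:          # hold steady for four minutes
        return max_users
    if run_time < 360:          # ramp down linearly over the final minute
        return int(max_users * (360 - run_time) / 60)
    return 0                    # test finished
```

For example, `gradual_user_count(30)` targets 50 users and `gradual_user_count(330)` targets 50 users on the way back down.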
79
tests/pytest.ini
Normal file
@@ -0,0 +1,79 @@
[tool:pytest]
# pytest configuration for AITBC

# Test discovery
testpaths = tests
python_files = test_*.py *_test.py
python_classes = Test*
python_functions = test_*

# Default options
addopts =
    --strict-markers
    --strict-config
    --verbose
    --tb=short
    --cov=apps
    --cov=packages
    --cov-report=html:htmlcov
    --cov-report=term-missing
    --cov-fail-under=80

# Import paths (pytest >= 7)
pythonpath =
    .
    apps
    packages

# Markers
markers =
    unit: Unit tests (fast, isolated)
    integration: Integration tests (require external services)
    e2e: End-to-end tests (full system)
    performance: Performance tests (measure speed/memory)
    security: Security tests (vulnerability scanning)
    slow: Slow tests (run separately)
    gpu: Tests requiring GPU resources
    confidential: Tests for confidential transactions
    multitenant: Multi-tenancy specific tests

# Minimum version
minversion = 6.0

# Test session timeouts (requires pytest-timeout)
timeout = 300
timeout_method = thread

# Logging
log_cli = true
log_cli_level = INFO
log_cli_format = %(asctime)s [%(levelname)8s] %(name)s: %(message)s
log_cli_date_format = %Y-%m-%d %H:%M:%S

# Warnings
filterwarnings =
    error
    ignore::UserWarning
    ignore::DeprecationWarning
    ignore::PendingDeprecationWarning

# Async configuration (requires pytest-asyncio)
asyncio_mode = auto

# Parallel execution
# Add "-n auto" to addopts above to enable parallel testing (requires pytest-xdist)
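With the markers registered above, subsets of the suite can be selected from the command line using standard pytest `-m` expressions (paths follow this repository's layout):

```shell
# Fast feedback: unit tests only
pytest -m unit

# Integration and e2e, skipping anything tagged slow or gpu
pytest -m "(integration or e2e) and not slow and not gpu"

# Security suite, with coverage output as configured in addopts
pytest -m security tests/security/
```

Because `--strict-markers` is set, any marker not declared in the `markers` list above fails collection rather than being silently ignored.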
700
tests/security/test_confidential_transactions.py
Normal file
@@ -0,0 +1,700 @@
"""
|
||||
Security tests for AITBC Confidential Transactions
|
||||
"""
|
||||
|
||||
import pytest
|
||||
import json
|
||||
from datetime import datetime, timedelta
|
||||
from unittest.mock import Mock, patch, AsyncMock
|
||||
from cryptography.hazmat.primitives.asymmetric import x25519
|
||||
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
|
||||
from cryptography.hazmat.primitives import hashes
|
||||
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
|
||||
|
||||
from apps.coordinator_api.src.app.services.confidential_service import ConfidentialTransactionService
|
||||
from apps.coordinator_api.src.app.models.confidential import ConfidentialTransaction, ViewingKey
|
||||
from packages.py.aitbc_crypto import encrypt_data, decrypt_data, generate_viewing_key
|
||||
|
||||
|
||||
@pytest.mark.security
|
||||
class TestConfidentialTransactionSecurity:
|
||||
"""Security tests for confidential transaction functionality"""
|
||||
|
||||
@pytest.fixture
|
||||
def confidential_service(self, db_session):
|
||||
"""Create confidential transaction service"""
|
||||
return ConfidentialTransactionService(db_session)
|
||||
|
||||
@pytest.fixture
|
||||
def sample_sender_keys(self):
|
||||
"""Generate sender's key pair"""
|
||||
private_key = x25519.X25519PrivateKey.generate()
|
||||
public_key = private_key.public_key()
|
||||
return private_key, public_key
|
||||
|
||||
@pytest.fixture
|
||||
def sample_receiver_keys(self):
|
||||
"""Generate receiver's key pair"""
|
||||
private_key = x25519.X25519PrivateKey.generate()
|
||||
public_key = private_key.public_key()
|
||||
return private_key, public_key
|
||||
|
||||
def test_encryption_confidentiality(self, sample_sender_keys, sample_receiver_keys):
|
||||
"""Test that transaction data remains confidential"""
|
||||
sender_private, sender_public = sample_sender_keys
|
||||
receiver_private, receiver_public = sample_receiver_keys
|
||||
|
||||
# Original transaction data
|
||||
transaction_data = {
|
||||
"sender": "0x1234567890abcdef",
|
||||
"receiver": "0xfedcba0987654321",
|
||||
"amount": 1000000, # 1 USDC
|
||||
"asset": "USDC",
|
||||
"nonce": 12345,
|
||||
}
|
||||
|
||||
# Encrypt for receiver only
|
||||
ciphertext = encrypt_data(
|
||||
data=json.dumps(transaction_data),
|
||||
sender_key=sender_private,
|
||||
receiver_key=receiver_public
|
||||
)
|
||||
|
||||
# Verify ciphertext doesn't reveal plaintext
|
||||
assert transaction_data["sender"] not in ciphertext
|
||||
assert transaction_data["receiver"] not in ciphertext
|
||||
assert str(transaction_data["amount"]) not in ciphertext
|
||||
|
||||
# Only receiver can decrypt
|
||||
decrypted = decrypt_data(
|
||||
ciphertext=ciphertext,
|
||||
receiver_key=receiver_private,
|
||||
sender_key=sender_public
|
||||
)
|
||||
|
||||
decrypted_data = json.loads(decrypted)
|
||||
assert decrypted_data == transaction_data
|
||||
|
||||
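The `encrypt_data`/`decrypt_data` helpers from `aitbc_crypto` are imported above but their implementation is not part of this diff. A minimal sketch of one scheme consistent with this test, using the `cryptography` primitives the file already imports (X25519 ECDH, HKDF-SHA256 key derivation, AES-GCM, hex-encoded output so the plaintext substring assertions hold), could look like the following; the function names and the `info` label are assumptions, not the project's actual implementation:

```python
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import x25519
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF


def _derive_key(private_key, peer_public_key):
    # ECDH shared secret -> 256-bit AES key; both sides derive the same key.
    shared = private_key.exchange(peer_public_key)
    return HKDF(
        algorithm=hashes.SHA256(), length=32,
        salt=None, info=b"aitbc-confidential-tx",  # label is an assumption
    ).derive(shared)


def encrypt_data(data, sender_key, receiver_key):
    key = _derive_key(sender_key, receiver_key)
    nonce = os.urandom(12)
    ct = AESGCM(key).encrypt(nonce, data.encode(), None)
    return (nonce + ct).hex()  # hex string: plaintext substrings never appear


def decrypt_data(ciphertext, receiver_key, sender_key):
    raw = bytes.fromhex(ciphertext)
    key = _derive_key(receiver_key, sender_key)
    return AESGCM(key).decrypt(raw[:12], raw[12:], None).decode()
```

Because ECDH is symmetric, `encrypt_data(sender_private, receiver_public)` and `decrypt_data(receiver_private, sender_public)` derive the same AES key, matching the key roles the test passes in.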
def test_viewing_key_generation(self):
|
||||
"""Test secure viewing key generation"""
|
||||
# Generate viewing key for auditor
|
||||
viewing_key = generate_viewing_key(
|
||||
purpose="audit",
|
||||
expires_at=datetime.utcnow() + timedelta(days=30),
|
||||
permissions=["view_amount", "view_parties"]
|
||||
)
|
||||
|
||||
# Verify key structure
|
||||
assert "key_id" in viewing_key
|
||||
assert "key_data" in viewing_key
|
||||
assert "expires_at" in viewing_key
|
||||
assert "permissions" in viewing_key
|
||||
|
||||
# Verify key entropy
|
||||
assert len(viewing_key["key_data"]) >= 32 # At least 256 bits
|
||||
|
||||
# Verify expiration
|
||||
assert viewing_key["expires_at"] > datetime.utcnow()
|
||||
|
||||
    def test_viewing_key_permissions(self, confidential_service):
        """Test that viewing keys respect permission constraints"""
        # Create confidential transaction
        tx = ConfidentialTransaction(
            id="confidential-tx-123",
            ciphertext="encrypted_data_here",
            sender_key="sender_pubkey",
            receiver_key="receiver_pubkey",
            created_at=datetime.utcnow(),
        )

        # Create viewing key with limited permissions
        viewing_key = ViewingKey(
            id="view-key-123",
            transaction_id=tx.id,
            key_data="encrypted_viewing_key",
            permissions=["view_amount"],
            expires_at=datetime.utcnow() + timedelta(days=1),
            created_at=datetime.utcnow(),
        )

        # Test permission enforcement
        with patch.object(confidential_service, 'decrypt_with_viewing_key') as mock_decrypt:
            mock_decrypt.return_value = {"amount": 1000}

            # Should succeed with valid permission
            result = confidential_service.view_transaction(
                tx.id,
                viewing_key.id,
                fields=["amount"]
            )
            assert "amount" in result

            # Should fail with invalid permission
            with pytest.raises(PermissionError):
                confidential_service.view_transaction(
                    tx.id,
                    viewing_key.id,
                    fields=["sender", "receiver"]  # Not permitted
                )

    def test_key_rotation_security(self, confidential_service):
        """Test secure key rotation"""
        # Create initial keys
        old_key = x25519.X25519PrivateKey.generate()
        new_key = x25519.X25519PrivateKey.generate()

        # Test key rotation process
        rotation_result = confidential_service.rotate_keys(
            transaction_id="tx-123",
            old_key=old_key,
            new_key=new_key
        )

        assert rotation_result["success"] is True
        assert "new_ciphertext" in rotation_result
        assert "rotation_id" in rotation_result

        # Verify old key can't decrypt new ciphertext
        with pytest.raises(Exception):
            decrypt_data(
                ciphertext=rotation_result["new_ciphertext"],
                receiver_key=old_key,
                sender_key=old_key.public_key()
            )

        # Verify new key can decrypt
        decrypted = decrypt_data(
            ciphertext=rotation_result["new_ciphertext"],
            receiver_key=new_key,
            sender_key=new_key.public_key()
        )
        assert decrypted is not None
    def test_transaction_replay_protection(self, confidential_service):
        """Test protection against transaction replay"""
        # Create transaction with nonce
        transaction = {
            "sender": "0x123",
            "receiver": "0x456",
            "amount": 1000,
            "nonce": 12345,
            "timestamp": datetime.utcnow().isoformat(),
        }

        # Store nonce
        confidential_service.store_nonce(12345, "tx-123")

        # Try to replay with the same nonce
        with pytest.raises(ValueError, match="nonce already used"):
            confidential_service.validate_transaction_nonce(
                transaction["nonce"],
                transaction["sender"]
            )

    def test_side_channel_resistance(self, confidential_service):
        """Test resistance to timing attacks"""
        import time

        # Create transactions with different amounts
        small_amount = {"amount": 1}
        large_amount = {"amount": 1000000}

        # Encrypt both
        small_cipher = encrypt_data(
            json.dumps(small_amount),
            x25519.X25519PrivateKey.generate(),
            x25519.X25519PrivateKey.generate().public_key()
        )

        large_cipher = encrypt_data(
            json.dumps(large_amount),
            x25519.X25519PrivateKey.generate(),
            x25519.X25519PrivateKey.generate().public_key()
        )

        # Measure decryption times
        times = []
        for ciphertext in [small_cipher, large_cipher]:
            start = time.perf_counter()
            try:
                decrypt_data(
                    ciphertext,
                    x25519.X25519PrivateKey.generate(),
                    x25519.X25519PrivateKey.generate().public_key()
                )
            except Exception:
                pass  # Expected to fail with wrong keys
            end = time.perf_counter()
            times.append(end - start)

        # Times should be similar (within 10%)
        time_diff = abs(times[0] - times[1]) / max(times)
        assert time_diff < 0.1, f"Timing difference too large: {time_diff}"
    def test_zero_knowledge_proof_integration(self):
        """Test ZK proof integration for privacy"""
        from apps.zk_circuits import generate_proof, verify_proof

        # Create confidential transaction
        transaction = {
            "input_commitment": "commitment123",
            "output_commitment": "commitment456",
            "amount": 1000,
        }

        # Generate ZK proof
        with patch('apps.zk_circuits.generate_proof') as mock_generate:
            mock_generate.return_value = {
                "proof": "zk_proof_here",
                "inputs": ["hash1", "hash2"],
            }

            proof_data = mock_generate(transaction)

            # Verify proof structure
            assert "proof" in proof_data
            assert "inputs" in proof_data
            assert len(proof_data["inputs"]) == 2

        # Verify proof
        with patch('apps.zk_circuits.verify_proof') as mock_verify:
            mock_verify.return_value = True

            is_valid = mock_verify(
                proof=proof_data["proof"],
                inputs=proof_data["inputs"]
            )

            assert is_valid is True

    def test_audit_log_integrity(self, confidential_service):
        """Test that audit logs maintain integrity"""
        # Create confidential transaction
        tx = ConfidentialTransaction(
            id="audit-tx-123",
            ciphertext="encrypted_data",
            sender_key="sender_key",
            receiver_key="receiver_key",
            created_at=datetime.utcnow(),
        )

        # Log access
        access_log = confidential_service.log_access(
            transaction_id=tx.id,
            user_id="auditor-123",
            action="view_with_viewing_key",
            timestamp=datetime.utcnow()
        )

        # Verify log integrity
        assert "log_id" in access_log
        assert "hash" in access_log
        assert "signature" in access_log

        # Verify log can't be tampered with
        original_hash = access_log["hash"]
        access_log["user_id"] = "malicious-user"

        # Recalculated hash should differ
        new_hash = confidential_service.calculate_log_hash(access_log)
        assert new_hash != original_hash
    def test_hsm_integration_security(self):
        """Test HSM integration for key management"""
        from apps.coordinator_api.src.app.services.hsm_service import HSMService

        # Mock HSM client
        mock_hsm = Mock()
        mock_hsm.generate_key.return_value = {"key_id": "hsm-key-123"}
        mock_hsm.sign_data.return_value = {"signature": "hsm-signature"}
        mock_hsm.encrypt.return_value = {"ciphertext": "hsm-encrypted"}

        with patch('apps.coordinator_api.src.app.services.hsm_service.HSMClient') as mock_client:
            mock_client.return_value = mock_hsm

            hsm_service = HSMService()

            # Test key generation
            key_result = hsm_service.generate_key(
                key_type="encryption",
                purpose="confidential_tx"
            )
            assert key_result["key_id"] == "hsm-key-123"

            # Test signing
            sign_result = hsm_service.sign_data(
                key_id="hsm-key-123",
                data="transaction_data"
            )
            assert "signature" in sign_result

            # Verify HSM was called
            mock_hsm.generate_key.assert_called_once()
            mock_hsm.sign_data.assert_called_once()

    def test_multi_party_computation(self):
        """Test MPC for transaction validation"""
        from apps.coordinator_api.src.app.services.mpc_service import MPCService

        mpc_service = MPCService()

        # Create transaction shares
        transaction = {
            "amount": 1000,
            "sender": "0x123",
            "receiver": "0x456",
        }

        # Generate shares
        shares = mpc_service.create_shares(transaction, threshold=3, total=5)

        assert len(shares) == 5
        assert all("share_id" in share for share in shares)
        assert all("encrypted_data" in share for share in shares)

        # Test reconstruction with sufficient shares
        selected_shares = shares[:3]
        reconstructed = mpc_service.reconstruct_transaction(selected_shares)

        assert reconstructed["amount"] == transaction["amount"]
        assert reconstructed["sender"] == transaction["sender"]

        # Test that insufficient shares fail
        with pytest.raises(ValueError):
            mpc_service.reconstruct_transaction(shares[:2])
    def test_forward_secrecy(self):
        """Test forward secrecy of confidential transactions"""
        import os

        # Generate ephemeral keys
        ephemeral_private = x25519.X25519PrivateKey.generate()
        ephemeral_public = ephemeral_private.public_key()

        receiver_private = x25519.X25519PrivateKey.generate()
        receiver_public = receiver_private.public_key()

        # Create shared secret
        shared_secret = ephemeral_private.exchange(receiver_public)

        # Derive encryption key
        derived_key = HKDF(
            algorithm=hashes.SHA256(),
            length=32,
            salt=None,
            info=b"aitbc-confidential-tx",
        ).derive(shared_secret)

        # Encrypt transaction (AESGCM has no nonce helper; use os.urandom)
        aesgcm = AESGCM(derived_key)
        nonce = os.urandom(12)
        transaction_data = json.dumps({"amount": 1000})
        ciphertext = aesgcm.encrypt(nonce, transaction_data.encode(), None)

        # Even if the ephemeral key is compromised later, past transactions
        # remain secure because the shared secret is not stored

        # Verify decryption works with current keys
        aesgcm_decrypt = AESGCM(derived_key)
        decrypted = aesgcm_decrypt.decrypt(nonce, ciphertext, None)
        assert json.loads(decrypted) == {"amount": 1000}

    def test_deniable_encryption(self):
        """Test deniable encryption for plausible deniability"""
        from apps.coordinator_api.src.app.services.deniable_service import DeniableEncryption

        deniable = DeniableEncryption()

        # Create two plausible messages
        real_message = {"amount": 1000000, "asset": "USDC"}
        fake_message = {"amount": 100, "asset": "USDC"}

        # Generate deniable ciphertext
        result = deniable.encrypt(
            real_message=real_message,
            fake_message=fake_message,
            receiver_key=x25519.X25519PrivateKey.generate()
        )

        assert "ciphertext" in result
        assert "real_key" in result
        assert "fake_key" in result

        # Can reveal either message depending on the key provided
        real_decrypted = deniable.decrypt(
            ciphertext=result["ciphertext"],
            key=result["real_key"]
        )
        assert json.loads(real_decrypted) == real_message

        fake_decrypted = deniable.decrypt(
            ciphertext=result["ciphertext"],
            key=result["fake_key"]
        )
        assert json.loads(fake_decrypted) == fake_message
@pytest.mark.security
class TestConfidentialTransactionVulnerabilities:
    """Test for potential vulnerabilities in confidential transactions"""

    def test_timing_attack_prevention(self):
        """Test prevention of timing attacks on amount comparison"""
        import time
        import statistics

        # Create various transaction amounts
        amounts = [1, 100, 1000, 10000, 100000, 1000000]

        encryption_times = []

        for amount in amounts:
            transaction = {"amount": amount}

            # Measure encryption time
            start = time.perf_counter_ns()
            ciphertext = encrypt_data(
                json.dumps(transaction),
                x25519.X25519PrivateKey.generate(),
                x25519.X25519PrivateKey.generate().public_key()
            )
            end = time.perf_counter_ns()

            encryption_times.append(end - start)

        # Check whether encryption time correlates with amount
        correlation = statistics.correlation(amounts, encryption_times)
        assert abs(correlation) < 0.1, f"Timing correlation detected: {correlation}"

    def test_memory_sanitization(self):
        """Test that sensitive plaintext does not leak into the ciphertext"""
        import gc

        # Create confidential transaction
        sensitive_data = "secret_transaction_data_12345"

        # Encrypt data
        ciphertext = encrypt_data(
            sensitive_data,
            x25519.X25519PrivateKey.generate(),
            x25519.X25519PrivateKey.generate().public_key()
        )

        # Force garbage collection of the plaintext reference
        del sensitive_data
        gc.collect()

        # The plaintext must not appear verbatim in the ciphertext
        assert "secret_transaction_data_12345" not in str(ciphertext)
    def test_key_derivation_security(self):
        """Test security of key derivation functions"""
        from cryptography.hazmat.primitives.kdf.hkdf import HKDF
        from cryptography.hazmat.primitives import hashes

        # Test with different salts
        base_key = b"base_key_material"
        salt1 = b"salt_1"
        salt2 = b"salt_2"

        kdf1 = HKDF(
            algorithm=hashes.SHA256(),
            length=32,
            salt=salt1,
            info=b"aitbc-key-derivation",
        )

        kdf2 = HKDF(
            algorithm=hashes.SHA256(),
            length=32,
            salt=salt2,
            info=b"aitbc-key-derivation",
        )

        key1 = kdf1.derive(base_key)
        key2 = kdf2.derive(base_key)

        # Different salts should produce different keys
        assert key1 != key2

        # Keys should be sufficiently random;
        # test by checking the bit distribution
        bit_count = sum(bin(byte).count('1') for byte in key1)
        bit_ratio = bit_count / (len(key1) * 8)
        assert 0.45 < bit_ratio < 0.55, "Key bits not evenly distributed"

    def test_side_channel_leakage_prevention(self):
        """Test prevention of various side channel attacks"""
        import psutil
        import os

        # Monitor resource usage during encryption
        process = psutil.Process(os.getpid())

        # Baseline measurements
        baseline_cpu = process.cpu_percent()
        baseline_memory = process.memory_info().rss

        # Perform encryption operations
        for i in range(100):
            data = f"transaction_data_{i}"
            encrypt_data(
                data,
                x25519.X25519PrivateKey.generate(),
                x25519.X25519PrivateKey.generate().public_key()
            )

        # Check for unusual resource usage patterns
        final_cpu = process.cpu_percent()
        final_memory = process.memory_info().rss

        cpu_increase = final_cpu - baseline_cpu
        memory_increase = final_memory - baseline_memory

        # Resource usage should be consistent
        assert cpu_increase < 50, f"Excessive CPU usage: {cpu_increase}%"
        assert memory_increase < 100 * 1024 * 1024, f"Excessive memory usage: {memory_increase} bytes"
    def test_quantum_resistance_preparation(self):
        """Test preparation for quantum-resistant cryptography"""
        # Test post-quantum key exchange simulation
        from apps.coordinator_api.src.app.services.pqc_service import PostQuantumCrypto

        pqc = PostQuantumCrypto()

        # Generate quantum-resistant key pair
        key_pair = pqc.generate_keypair(algorithm="kyber768")

        assert "private_key" in key_pair
        assert "public_key" in key_pair
        assert "algorithm" in key_pair
        assert key_pair["algorithm"] == "kyber768"

        # Test quantum-resistant signature
        message = "confidential_transaction_hash"
        signature = pqc.sign(
            message=message,
            private_key=key_pair["private_key"],
            algorithm="dilithium3"
        )

        assert "signature" in signature
        assert "algorithm" in signature

        # Verify signature
        is_valid = pqc.verify(
            message=message,
            signature=signature["signature"],
            public_key=key_pair["public_key"],
            algorithm="dilithium3"
        )

        assert is_valid is True
@pytest.mark.security
class TestConfidentialTransactionCompliance:
    """Test compliance features for confidential transactions"""

    def test_regulatory_reporting(self, confidential_service):
        """Test regulatory reporting while maintaining privacy"""
        # Create confidential transaction
        tx = ConfidentialTransaction(
            id="regulatory-tx-123",
            ciphertext="encrypted_data",
            sender_key="sender_key",
            receiver_key="receiver_key",
            created_at=datetime.utcnow(),
        )

        # Generate regulatory report
        report = confidential_service.generate_regulatory_report(
            transaction_id=tx.id,
            reporting_fields=["timestamp", "asset_type", "jurisdiction"],
            viewing_authority="financial_authority_123"
        )

        # Report should contain required fields but no private data
        assert "transaction_id" in report
        assert "timestamp" in report
        assert "asset_type" in report
        assert "jurisdiction" in report
        assert "amount" not in report  # Should remain confidential
        assert "sender" not in report  # Should remain confidential
        assert "receiver" not in report  # Should remain confidential

    def test_kyc_aml_integration(self, confidential_service):
        """Test KYC/AML checks without compromising privacy"""
        # Create transaction with encrypted parties
        encrypted_parties = {
            "sender": "encrypted_sender_data",
            "receiver": "encrypted_receiver_data",
        }

        # Perform KYC/AML check
        with patch('apps.coordinator_api.src.app.services.aml_service.check_parties') as mock_aml:
            mock_aml.return_value = {
                "sender_status": "cleared",
                "receiver_status": "cleared",
                "risk_score": 0.2,
            }

            aml_result = confidential_service.perform_aml_check(
                encrypted_parties=encrypted_parties,
                viewing_permission="regulatory_only"
            )

            assert aml_result["sender_status"] == "cleared"
            assert aml_result["risk_score"] < 0.5

            # Verify parties remain encrypted
            assert "sender_address" not in aml_result
            assert "receiver_address" not in aml_result
    def test_audit_trail_privacy(self, confidential_service):
        """Test audit trail that preserves privacy"""
        # Create a series of confidential transactions
        transactions = [
            {"id": f"tx-{i}", "amount": 1000 * i}
            for i in range(10)
        ]

        # Generate privacy-preserving audit trail
        audit_trail = confidential_service.generate_audit_trail(
            transactions=transactions,
            privacy_level="high",
            auditor_id="auditor_123"
        )

        # The audit trail should contain aggregates:
        assert "transaction_count" in audit_trail
        assert "total_volume" in audit_trail
        assert "time_range" in audit_trail
        assert "compliance_hash" in audit_trail

        # But not individual details:
        assert "transaction_ids" not in audit_trail
        assert "individual_amounts" not in audit_trail
        assert "party_addresses" not in audit_trail

    def test_data_retention_policy(self, confidential_service):
        """Test data retention and automatic deletion"""
        # Create an old confidential transaction
        old_tx = ConfidentialTransaction(
            id="old-tx-123",
            ciphertext="old_encrypted_data",
            created_at=datetime.utcnow() - timedelta(days=400),  # Over 1 year
        )

        # Test retention policy enforcement
        with patch('apps.coordinator_api.src.app.services.retention_service.check_retention') as mock_check:
            mock_check.return_value = {"should_delete": True, "reason": "expired"}

            deletion_result = confidential_service.enforce_retention_policy(
                transaction_id=old_tx.id,
                policy_duration_days=365
            )

            assert deletion_result["deleted"] is True
            assert "deletion_timestamp" in deletion_result
            assert "compliance_log" in deletion_result
531
tests/unit/test_coordinator_api.py
Normal file
@ -0,0 +1,531 @@
"""
Unit tests for AITBC Coordinator API
"""

import pytest
import json
from datetime import datetime, timedelta
from unittest.mock import Mock, patch, AsyncMock
from fastapi.testclient import TestClient

from apps.coordinator_api.src.app.main import app
from apps.coordinator_api.src.app.models.job import Job, JobStatus
from apps.coordinator_api.src.app.models.receipt import JobReceipt
from apps.coordinator_api.src.app.services.job_service import JobService
from apps.coordinator_api.src.app.services.receipt_service import ReceiptService
from apps.coordinator_api.src.app.exceptions import JobError, ValidationError


@pytest.mark.unit
class TestJobEndpoints:
    """Test job-related endpoints"""

    def test_create_job_success(self, coordinator_client, sample_job_data, sample_tenant):
        """Test successful job creation"""
        response = coordinator_client.post(
            "/v1/jobs",
            json=sample_job_data,
            headers={"X-Tenant-ID": sample_tenant.id}
        )

        assert response.status_code == 201
        data = response.json()
        assert data["id"] is not None
        assert data["status"] == "pending"
        assert data["job_type"] == sample_job_data["job_type"]
        assert data["tenant_id"] == sample_tenant.id

    def test_create_job_invalid_data(self, coordinator_client):
        """Test job creation with invalid data"""
        invalid_data = {
            "job_type": "invalid_type",
            "parameters": {},
        }

        response = coordinator_client.post("/v1/jobs", json=invalid_data)
        assert response.status_code == 422
        assert "detail" in response.json()
    def test_create_job_unauthorized(self, coordinator_client, sample_job_data):
        """Test job creation without tenant ID"""
        response = coordinator_client.post("/v1/jobs", json=sample_job_data)
        assert response.status_code == 401

    def test_get_job_success(self, coordinator_client, sample_job_data, sample_tenant):
        """Test successful job retrieval"""
        # Create a job first
        create_response = coordinator_client.post(
            "/v1/jobs",
            json=sample_job_data,
            headers={"X-Tenant-ID": sample_tenant.id}
        )
        job_id = create_response.json()["id"]

        # Retrieve the job
        response = coordinator_client.get(
            f"/v1/jobs/{job_id}",
            headers={"X-Tenant-ID": sample_tenant.id}
        )

        assert response.status_code == 200
        data = response.json()
        assert data["id"] == job_id
        assert data["job_type"] == sample_job_data["job_type"]

    def test_get_job_not_found(self, coordinator_client, sample_tenant):
        """Test retrieving a non-existent job"""
        response = coordinator_client.get(
            "/v1/jobs/non-existent",
            headers={"X-Tenant-ID": sample_tenant.id}
        )
        assert response.status_code == 404
    def test_list_jobs_success(self, coordinator_client, sample_job_data, sample_tenant):
        """Test successful job listing"""
        # Create multiple jobs
        for i in range(5):
            coordinator_client.post(
                "/v1/jobs",
                json=sample_job_data,
                headers={"X-Tenant-ID": sample_tenant.id}
            )

        # List jobs
        response = coordinator_client.get(
            "/v1/jobs",
            headers={"X-Tenant-ID": sample_tenant.id}
        )

        assert response.status_code == 200
        data = response.json()
        assert "items" in data
        assert len(data["items"]) >= 5
        assert "total" in data
        assert "page" in data

    def test_list_jobs_with_filters(self, coordinator_client, sample_job_data, sample_tenant):
        """Test job listing with filters"""
        # Create a job with a distinct priority
        coordinator_client.post(
            "/v1/jobs",
            json={**sample_job_data, "priority": "high"},
            headers={"X-Tenant-ID": sample_tenant.id}
        )

        # Filter by priority
        response = coordinator_client.get(
            "/v1/jobs?priority=high",
            headers={"X-Tenant-ID": sample_tenant.id}
        )

        assert response.status_code == 200
        data = response.json()
        assert all(job["priority"] == "high" for job in data["items"])
    def test_cancel_job_success(self, coordinator_client, sample_job_data, sample_tenant):
        """Test successful job cancellation"""
        # Create a job
        create_response = coordinator_client.post(
            "/v1/jobs",
            json=sample_job_data,
            headers={"X-Tenant-ID": sample_tenant.id}
        )
        job_id = create_response.json()["id"]

        # Cancel the job
        response = coordinator_client.patch(
            f"/v1/jobs/{job_id}/cancel",
            headers={"X-Tenant-ID": sample_tenant.id}
        )

        assert response.status_code == 200
        data = response.json()
        assert data["status"] == "cancelled"

    def test_cancel_completed_job(self, coordinator_client, sample_job_data, sample_tenant):
        """Test cancelling a completed job"""
        # Create and complete a job
        create_response = coordinator_client.post(
            "/v1/jobs",
            json=sample_job_data,
            headers={"X-Tenant-ID": sample_tenant.id}
        )
        job_id = create_response.json()["id"]

        # Mark as completed
        coordinator_client.patch(
            f"/v1/jobs/{job_id}",
            json={"status": "completed"},
            headers={"X-Tenant-ID": sample_tenant.id}
        )

        # Try to cancel
        response = coordinator_client.patch(
            f"/v1/jobs/{job_id}/cancel",
            headers={"X-Tenant-ID": sample_tenant.id}
        )

        assert response.status_code == 400
        assert "cannot be cancelled" in response.json()["detail"].lower()
@pytest.mark.unit
class TestReceiptEndpoints:
    """Test receipt-related endpoints"""

    def test_get_receipts_success(self, coordinator_client, sample_job_data, sample_tenant, signed_receipt):
        """Test successful receipt retrieval"""
        # Create a job
        create_response = coordinator_client.post(
            "/v1/jobs",
            json=sample_job_data,
            headers={"X-Tenant-ID": sample_tenant.id}
        )
        job_id = create_response.json()["id"]

        # Mock receipt storage
        with patch('apps.coordinator_api.src.app.services.receipt_service.ReceiptService.get_job_receipts') as mock_get:
            mock_get.return_value = [signed_receipt]

            response = coordinator_client.get(
                f"/v1/jobs/{job_id}/receipts",
                headers={"X-Tenant-ID": sample_tenant.id}
            )

            assert response.status_code == 200
            data = response.json()
            assert "items" in data
            assert len(data["items"]) > 0
            assert "signature" in data["items"][0]

    def test_verify_receipt_success(self, coordinator_client, signed_receipt):
        """Test successful receipt verification"""
        with patch('apps.coordinator_api.src.app.services.receipt_service.verify_receipt') as mock_verify:
            mock_verify.return_value = {"valid": True}

            response = coordinator_client.post(
                "/v1/receipts/verify",
                json={"receipt": signed_receipt}
            )

            assert response.status_code == 200
            data = response.json()
            assert data["valid"] is True

    def test_verify_receipt_invalid(self, coordinator_client):
        """Test verification of an invalid receipt"""
        invalid_receipt = {
            "job_id": "test",
            "signature": "invalid"
        }

        with patch('apps.coordinator_api.src.app.services.receipt_service.verify_receipt') as mock_verify:
            mock_verify.return_value = {"valid": False, "error": "Invalid signature"}

            response = coordinator_client.post(
                "/v1/receipts/verify",
                json={"receipt": invalid_receipt}
            )

            assert response.status_code == 200
            data = response.json()
            assert data["valid"] is False
            assert "error" in data
@pytest.mark.unit
class TestMinerEndpoints:
    """Test miner-related endpoints"""

    def test_register_miner_success(self, coordinator_client, sample_tenant):
        """Test successful miner registration"""
        miner_data = {
            "miner_id": "test-miner-123",
            "endpoint": "http://localhost:9000",
            "capabilities": ["ai_inference", "image_generation"],
            "resources": {
                "gpu_memory": "16GB",
                "cpu_cores": 8,
            }
        }

        response = coordinator_client.post(
            "/v1/miners/register",
            json=miner_data,
            headers={"X-Tenant-ID": sample_tenant.id}
        )

        assert response.status_code == 201
        data = response.json()
        assert data["miner_id"] == miner_data["miner_id"]
        assert data["status"] == "active"

    def test_miner_heartbeat_success(self, coordinator_client, sample_tenant):
        """Test successful miner heartbeat"""
        heartbeat_data = {
            "miner_id": "test-miner-123",
            "status": "active",
            "current_jobs": 2,
            "resources_used": {
                "gpu_memory": "8GB",
                "cpu_cores": 4,
            }
        }

        with patch('apps.coordinator_api.src.app.services.miner_service.MinerService.update_heartbeat') as mock_heartbeat:
            mock_heartbeat.return_value = {"updated": True}

            response = coordinator_client.post(
                "/v1/miners/heartbeat",
                json=heartbeat_data,
                headers={"X-Tenant-ID": sample_tenant.id}
            )

            assert response.status_code == 200
            data = response.json()
            assert data["updated"] is True

    def test_fetch_jobs_success(self, coordinator_client, sample_tenant):
        """Test successful job fetching by a miner"""
        with patch('apps.coordinator_api.src.app.services.job_service.JobService.get_available_jobs') as mock_fetch:
            mock_fetch.return_value = [
                {
                    "id": "job-123",
                    "job_type": "ai_inference",
                    "requirements": {"gpu_memory": "8GB"}
                }
            ]

            response = coordinator_client.get(
                "/v1/miners/jobs",
                headers={"X-Tenant-ID": sample_tenant.id}
            )

            assert response.status_code == 200
            data = response.json()
            assert isinstance(data, list)
            assert len(data) > 0
@pytest.mark.unit
class TestMarketplaceEndpoints:
    """Test marketplace-related endpoints"""

    def test_create_offer_success(self, coordinator_client, sample_tenant):
        """Test successful offer creation"""
        offer_data = {
            "service_type": "ai_inference",
            "pricing": {
                "per_hour": 0.50,
                "per_token": 0.0001,
            },
            "capacity": 100,
            "requirements": {
                "gpu_memory": "16GB",
            }
        }

        response = coordinator_client.post(
            "/v1/marketplace/offers",
            json=offer_data,
            headers={"X-Tenant-ID": sample_tenant.id}
        )

        assert response.status_code == 201
        data = response.json()
        assert data["id"] is not None
        assert data["service_type"] == offer_data["service_type"]

    def test_list_offers_success(self, coordinator_client, sample_tenant):
        """Test successful offer listing"""
        response = coordinator_client.get(
            "/v1/marketplace/offers",
            headers={"X-Tenant-ID": sample_tenant.id}
        )

        assert response.status_code == 200
        data = response.json()
        assert "items" in data
        assert isinstance(data["items"], list)

    def test_create_bid_success(self, coordinator_client, sample_tenant):
        """Test successful bid creation"""
        bid_data = {
            "offer_id": "offer-123",
            "quantity": 10,
            "max_price": 1.00,
        }

        response = coordinator_client.post(
            "/v1/marketplace/bids",
            json=bid_data,
            headers={"X-Tenant-ID": sample_tenant.id}
        )

        assert response.status_code == 201
        data = response.json()
        assert data["id"] is not None
        assert data["offer_id"] == bid_data["offer_id"]
@pytest.mark.unit
class TestMultiTenancy:
    """Test multi-tenancy features"""

    def test_tenant_isolation(self, coordinator_client, sample_job_data, sample_tenant):
        """Test that tenants cannot access each other's data"""
        # Create job for tenant A
        response_a = coordinator_client.post(
            "/v1/jobs",
            json=sample_job_data,
            headers={"X-Tenant-ID": sample_tenant.id}
        )
        job_id_a = response_a.json()["id"]

        # Try to access with different tenant ID
        response = coordinator_client.get(
            f"/v1/jobs/{job_id_a}",
            headers={"X-Tenant-ID": "different-tenant"}
        )

        assert response.status_code == 404

    def test_quota_enforcement(self, coordinator_client, sample_job_data, sample_tenant, sample_tenant_quota):
        """Test that quota limits are enforced"""
        # Mock quota service
        with patch('apps.coordinator_api.src.app.services.quota_service.QuotaService.check_quota') as mock_check:
            mock_check.return_value = False

            response = coordinator_client.post(
                "/v1/jobs",
                json=sample_job_data,
                headers={"X-Tenant-ID": sample_tenant.id}
            )

            assert response.status_code == 429
            assert "quota" in response.json()["detail"].lower()

    def test_tenant_metrics(self, coordinator_client, sample_tenant):
        """Test tenant-specific metrics"""
        response = coordinator_client.get(
            "/v1/metrics",
            headers={"X-Tenant-ID": sample_tenant.id}
        )

        assert response.status_code == 200
        data = response.json()
        assert "tenant_id" in data
        assert data["tenant_id"] == sample_tenant.id


@pytest.mark.unit
class TestErrorHandling:
    """Test error handling and edge cases"""

    def test_validation_errors(self, coordinator_client):
        """Test validation error responses"""
        # Send invalid JSON
        response = coordinator_client.post(
            "/v1/jobs",
            data="invalid json",
            headers={"Content-Type": "application/json"}
        )

        assert response.status_code == 422
        assert "detail" in response.json()

    def test_rate_limiting(self, coordinator_client, sample_tenant):
        """Test rate limiting"""
        with patch('apps.coordinator_api.src.app.middleware.rate_limit.check_rate_limit') as mock_check:
            mock_check.return_value = False

            response = coordinator_client.get(
                "/v1/jobs",
                headers={"X-Tenant-ID": sample_tenant.id}
            )

            assert response.status_code == 429
            assert "rate limit" in response.json()["detail"].lower()

    def test_internal_server_error(self, coordinator_client, sample_tenant):
        """Test internal server error handling"""
        with patch('apps.coordinator_api.src.app.services.job_service.JobService.create_job') as mock_create:
            mock_create.side_effect = Exception("Database error")

            response = coordinator_client.post(
                "/v1/jobs",
                json={"job_type": "test"},
                headers={"X-Tenant-ID": sample_tenant.id}
            )

            assert response.status_code == 500
            assert "internal server error" in response.json()["detail"].lower()


@pytest.mark.unit
class TestWebhooks:
    """Test webhook functionality"""

    def test_webhook_signature_verification(self, coordinator_client):
        """Test webhook signature verification"""
        webhook_data = {
            "event": "job.completed",
            "job_id": "test-123",
            "timestamp": datetime.utcnow().isoformat(),
        }

        # Mock signature verification
        with patch('apps.coordinator_api.src.app.webhooks.verify_webhook_signature') as mock_verify:
            mock_verify.return_value = True

            response = coordinator_client.post(
                "/v1/webhooks/job-status",
                json=webhook_data,
                headers={"X-Webhook-Signature": "test-signature"}
            )

            assert response.status_code == 200

    def test_webhook_invalid_signature(self, coordinator_client):
        """Test webhook with invalid signature"""
        webhook_data = {"event": "test"}

        with patch('apps.coordinator_api.src.app.webhooks.verify_webhook_signature') as mock_verify:
            mock_verify.return_value = False

            response = coordinator_client.post(
                "/v1/webhooks/job-status",
                json=webhook_data,
                headers={"X-Webhook-Signature": "invalid"}
            )

            assert response.status_code == 401


@pytest.mark.unit
class TestHealthAndMetrics:
    """Test health check and metrics endpoints"""

    def test_health_check(self, coordinator_client):
        """Test health check endpoint"""
        response = coordinator_client.get("/health")

        assert response.status_code == 200
        data = response.json()
        assert "status" in data
        assert data["status"] == "healthy"

    def test_metrics_endpoint(self, coordinator_client):
        """Test Prometheus metrics endpoint"""
        response = coordinator_client.get("/metrics")

        assert response.status_code == 200
        assert "text/plain" in response.headers["content-type"]

    def test_readiness_check(self, coordinator_client):
        """Test readiness check endpoint"""
        response = coordinator_client.get("/ready")

        assert response.status_code == 200
        data = response.json()
        assert "ready" in data
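The webhook tests above patch `verify_webhook_signature` rather than exercising a real check. A minimal sketch of such a helper, assuming an HMAC-SHA256 scheme over the raw request body (the coordinator's actual signing scheme and secret handling are not shown in these tests):

```python
# Hypothetical signature check matching what the tests mock out.
# The HMAC-SHA256 scheme is an assumption, not the project's confirmed API.
import hashlib
import hmac


def verify_webhook_signature(payload: bytes, signature: str, secret: bytes) -> bool:
    """Return True when signature equals hex(HMAC-SHA256(secret, payload))."""
    expected = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    # compare_digest prevents timing attacks on the comparison
    return hmac.compare_digest(expected, signature)
```

A sender would compute the same digest over the serialized body and place it in the `X-Webhook-Signature` header; any mismatch yields the 401 asserted in `test_webhook_invalid_signature`.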