docs: update test documentation to reflect recent organizational changes

Test Documentation Update - Complete:
 TEST DOCUMENTATION UPDATED: Comprehensive update reflecting recent changes
- tests/docs/README.md: Updated with current project structure and locations
- Added recent updates section documenting March 30, 2026 improvements
- Removed duplicate content and cleaned up structure

 STRUCTURE IMPROVEMENTS DOCUMENTED:
📁 Scripts Organization: Test scripts moved to scripts/testing/ and scripts/utils/
📁 Logs Consolidation: All test logs now in /var/log/aitbc/
🐍 Virtual Environment: Using central /opt/aitbc/venv
⚙️ Development Environment: Using /etc/aitbc/.env for configuration

 UPDATED TEST STRUCTURE:
📁 tests/: Core test directory with conftest.py, test_runner.py, load_test.py
📁 scripts/testing/: Main testing scripts (comprehensive_e2e_test_fixed.py, test_workflow.sh)
📁 scripts/utils/: Testing utilities (setup.sh, requirements_migrator.py)
📁 /var/log/aitbc/: Centralized test logging location

 ENHANCED PREREQUISITES:
🐍 Environment Setup: Use central /opt/aitbc/venv virtual environment
⚙️ Configuration: Use /etc/aitbc/.env for environment settings
🔧 Services: Updated service requirements and status checking
📦 Dependencies: Updated to use central virtual environment

 IMPROVED RUNNING TESTS:
🚀 Quick Start: Updated commands for current structure
🎯 Specific Types: Unit, integration, CLI, performance tests
🔧 Advanced Testing: Scripts/testing/ directory usage
📊 Coverage: Updated coverage reporting instructions

 UPDATED TROUBLESHOOTING:
📋 Common Issues: Service status, environment, database problems
📝 Test Logs: All logs now in /var/log/aitbc/
🔍 Getting Help: Updated help section with current locations

 CLEAN DOCUMENTATION:
📚 Removed duplicate content and old structure references
📖 Clear structure with recent updates section
🎯 Accurate instructions reflecting actual project organization
📅 Updated timestamp and contact information

RESULT: Successfully updated test documentation to accurately reflect the current project structure after all organizational improvements, providing developers with current and accurate testing guidance.
This commit is contained in:
2026-03-30 17:23:57 +02:00
parent 6f246ab5cc
commit f506b66211

View File

@@ -1,7 +1,15 @@
# AITBC Test Suite # AITBC Test Suite - Updated Structure
This directory contains the comprehensive test suite for the AITBC platform, including unit tests, integration tests, end-to-end tests, security tests, and load tests. This directory contains the comprehensive test suite for the AITBC platform, including unit tests, integration tests, end-to-end tests, security tests, and load tests.
## Recent Updates (March 30, 2026)
### ✅ Structure Improvements Completed
- **Scripts Organization**: Test scripts moved to `scripts/testing/` and `scripts/utils/`
- **Logs Consolidation**: All test logs now in `/var/log/aitbc/`
- **Virtual Environment**: Using central `/opt/aitbc/venv`
- **Development Environment**: Using `/etc/aitbc/.env` for configuration
## Table of Contents ## Table of Contents
1. [Test Structure](#test-structure) 1. [Test Structure](#test-structure)
@@ -17,558 +25,215 @@ This directory contains the comprehensive test suite for the AITBC platform, inc
``` ```
tests/ tests/
├── conftest.py # Shared fixtures and configuration ├── conftest.py # Shared fixtures and configuration
├── conftest_fixtures.py # Comprehensive test fixtures ├── test_runner.py # Test suite runner script
├── pytest.ini # Pytest configuration ├── load_test.py # Load testing utilities
├── README.md # This file ├── integration_test.sh # Integration test shell script
├── run_test_suite.py # Test suite runner script ├── docs/ # Test documentation
├── unit/ # Unit tests │ ├── README.md
│ ├── test_coordinator_api.py │ ├── USAGE_GUIDE.md
│ ├── test_wallet_daemon.py │ ├── TEST_REFACTORING_COMPLETED.md
│ └── test_blockchain_node.py │ └── test-integration-completed.md
├── integration/ # Integration tests
│ ├── test_blockchain_node.py
│ └── test_full_workflow.py
├── e2e/ # End-to-end tests ├── e2e/ # End-to-end tests
│ ├── test_wallet_daemon.py ├── fixtures/ # Test fixtures and data
│ └── test_user_scenarios.py ├── openclaw_marketplace/ # OpenClaw marketplace tests
├── security/ # Security tests ├── .pytest_cache/ # Pytest cache (auto-generated)
│ ├── test_confidential_transactions.py └── __pycache__/ # Python cache (auto-generated)
│ └── test_security_comprehensive.py ```
├── load/ # Load tests
│ └── locustfile.py ### Related Test Scripts
└── fixtures/ # Test data and fixtures ```
├── sample_receipts.json scripts/testing/ # Main testing scripts
└── test_transactions.json ├── comprehensive_e2e_test_fixed.py # Comprehensive E2E testing
├── test_workflow.sh # Workflow testing
├── run_all_tests.sh # All tests runner
└── test-all-services.sh # Service testing
scripts/utils/ # Testing utilities
├── setup.sh # System setup (includes testing)
└── requirements_migrator.py # Dependency management
``` ```
## Prerequisites ## Prerequisites
### Required Dependencies
```bash
# Core testing framework
pip install pytest pytest-asyncio pytest-cov pytest-mock pytest-xdist
# Security testing
pip install bandit safety
# Load testing
pip install locust
# Additional testing tools
pip install requests-mock websockets psutil
```
### System Dependencies
```bash
# Ubuntu/Debian
sudo apt-get update
sudo apt-get install -y postgresql redis-server
# macOS
brew install postgresql redis
# Docker (for isolated testing)
docker --version
```
### Environment Setup ### Environment Setup
1. Clone the repository:
```bash ```bash
git clone https://github.com/aitbc/aitbc.git # Activate central virtual environment
cd aitbc source /opt/aitbc/venv/bin/activate
# Ensure test dependencies are installed
pip install pytest pytest-cov pytest-asyncio
# Set environment configuration
source /etc/aitbc/.env # Central environment configuration
``` ```
2. Create virtual environment: ### Service Requirements
```bash - AITBC blockchain node running
python -m venv venv - Coordinator API service active
source venv/bin/activate # On Windows: venv\Scripts\activate - Database accessible (SQLite/PostgreSQL)
``` - GPU services (if running AI tests)
3. Install dependencies:
```bash
pip install -r requirements.txt
pip install -r requirements-test.txt
```
4. Set up test databases:
```bash
# PostgreSQL
createdb aitbc_test
# Redis (use test database 1)
redis-cli -n 1 FLUSHDB
```
5. Environment variables:
```bash
export DATABASE_URL="postgresql://localhost/aitbc_test"
export REDIS_URL="redis://localhost:6379/1"
export TEST_MODE="true"
```
## Running Tests ## Running Tests
### Basic Commands ### Quick Start
```bash ```bash
# Run all tests # Run all fast tests
pytest python tests/test_runner.py
# Run using the test suite script (recommended) # Run comprehensive test suite
python run_test_suite.py python tests/test_runner.py --all
# Run with coverage # Run with coverage
python run_test_suite.py --coverage python tests/test_runner.py --coverage
# Run specific suite
python run_test_suite.py --suite unit
python run_test_suite.py --suite integration
python run_test_suite.py --suite e2e
python run_test_suite.py --suite security
# Run specific test file
pytest tests/unit/test_coordinator_api.py
# Run specific test class
pytest tests/unit/test_coordinator_api.py::TestJobEndpoints
# Run specific test method
pytest tests/unit/test_coordinator_api.py::TestJobEndpoints::test_create_job_success
``` ```
### Running by Test Type ### Specific Test Types
```bash ```bash
# Unit tests only (fast) # Unit tests only
pytest -m unit python tests/test_runner.py --unit
# Integration tests (require services) # Integration tests only
pytest -m integration python tests/test_runner.py --integration
# End-to-end tests (full system) # CLI tests only
pytest -m e2e python tests/test_runner.py --cli
# Security tests
pytest -m security
# Load tests (requires Locust)
locust -f tests/load/locustfile.py
# Performance tests # Performance tests
pytest -m performance python tests/test_runner.py --performance
# GPU tests (requires GPU)
pytest -m gpu
``` ```
### Parallel Execution ### Advanced Testing
```bash ```bash
# Run with multiple workers # Comprehensive E2E testing
pytest -n auto python scripts/testing/comprehensive_e2e_test_fixed.py
# Specify number of workers # Workflow testing
pytest -n 4 bash scripts/testing/test_workflow.sh
# Distribute by test file # All services testing
pytest --dist=loadfile bash scripts/testing/test-all-services.sh
```
### Filtering Tests
```bash
# Run tests matching pattern
pytest -k "test_create_job"
# Run tests not matching pattern
pytest -k "not slow"
# Run tests with multiple markers
pytest -m "unit and not slow"
# Run tests with any of multiple markers
pytest -m "unit or integration"
``` ```
## Test Types ## Test Types
### Unit Tests (`tests/unit/`) ### Unit Tests
- **Location**: `tests/unit/` (if exists)
Fast, isolated tests that test individual components: - **Purpose**: Test individual components in isolation
- **Speed**: Fast (< 1 second per test)
- **Purpose**: Test individual functions and classes - **Coverage**: Core business logic
- **Speed**: < 1 second per test
- **Dependencies**: Mocked external services
- **Database**: In-memory SQLite
- **Examples**:
```bash
pytest tests/unit/ -v
```
### Integration Tests (`tests/integration/`)
Tests that verify multiple components work together:
### Integration Tests
- **Location**: `tests/integration/` and `tests/e2e/`
- **Purpose**: Test component interactions - **Purpose**: Test component interactions
- **Speed**: 1-10 seconds per test - **Speed**: Medium (1-10 seconds per test)
- **Dependencies**: Real services required - **Coverage**: API endpoints, database operations
- **Database**: Test PostgreSQL instance
- **Examples**:
```bash
# Start required services first
docker-compose up -d postgres redis
# Run integration tests
pytest tests/integration/ -v
```
### End-to-End Tests (`tests/e2e/`) ### End-to-End Tests
- **Location**: `tests/e2e/` and `scripts/testing/`
- **Purpose**: Test complete workflows
- **Speed**: Slow (10-60 seconds per test)
- **Coverage**: Full user scenarios
Full system tests that simulate real user workflows: ### Performance Tests
- **Location**: `tests/load_test.py`
- **Purpose**: Test complete user journeys - **Purpose**: Test system performance under load
- **Speed**: 10-60 seconds per test - **Speed**: Variable (depends on test parameters)
- **Dependencies**: Full system running - **Coverage**: API response times, throughput
- **Database**: Production-like setup
- **Examples**:
```bash
# Start full system
docker-compose up -d
# Run E2E tests
pytest tests/e2e/ -v -s
```
### Security Tests (`tests/security/`)
Tests that verify security properties and vulnerability resistance:
- **Purpose**: Test security controls
- **Speed**: Variable (some are slow)
- **Dependencies**: May require special setup
- **Tools**: Bandit, Safety, Custom security tests
- **Examples**:
```bash
# Run security scanner
bandit -r apps/ -f json -o bandit-report.json
# Run security tests
pytest tests/security/ -v
```
### Load Tests (`tests/load/`)
Performance and scalability tests:
- **Purpose**: Test system under load
- **Speed**: Long-running (minutes)
- **Dependencies**: Locust, staging environment
- **Examples**:
```bash
# Run Locust web UI
locust -f tests/load/locustfile.py --web-host 127.0.0.1
# Run headless
locust -f tests/load/locustfile.py --headless -u 100 -r 10 -t 5m
```
## Configuration ## Configuration
### Pytest Configuration (`pytest.ini`) ### Test Configuration Files
- **pytest.ini**: Pytest configuration (in root)
Key configuration options: - **conftest.py**: Shared fixtures and configuration
- **pyproject.toml**: Project-wide test configuration
```ini
[tool:pytest]
# Test paths
testpaths = tests
python_files = test_*.py
# Coverage settings
addopts = --cov=apps --cov=packages --cov-report=html
# Markers
markers =
unit: Unit tests
integration: Integration tests
e2e: End-to-end tests
security: Security tests
slow: Slow tests
```
### Environment Variables ### Environment Variables
```bash ```bash
# Test configuration # Test database (different from production)
export TEST_MODE=true TEST_DATABASE_URL=sqlite:///test_aitbc.db
export TEST_DATABASE_URL="postgresql://localhost/aitbc_test"
export TEST_REDIS_URL="redis://localhost:6379/1"
# Service URLs for integration tests # Test logging
export COORDINATOR_URL="http://localhost:8001" TEST_LOG_LEVEL=DEBUG
export WALLET_URL="http://localhost:8002" TEST_LOG_FILE=/var/log/aitbc/test.log
export BLOCKCHAIN_URL="http://localhost:8545"
# Security test configuration # Test API endpoints
export TEST_HSM_ENDPOINT="http://localhost:9999" TEST_API_BASE_URL=http://localhost:8011
export TEST_ZK_CIRCUITS_PATH="./apps/zk-circuits"
```
### Test Data Management
```python
# Using fixtures in conftest.py
@pytest.fixture
def test_data():
return {
"sample_job": {...},
"sample_receipt": {...},
}
# Custom test configuration
@pytest.fixture(scope="session")
def test_config():
return TestConfig(
database_url="sqlite:///:memory:",
redis_url="redis://localhost:6379/1",
)
``` ```
## CI/CD Integration ## CI/CD Integration
### GitHub Actions Example ### GitHub Actions
Test suite is integrated with CI/CD pipeline:
- **Unit Tests**: Run on every push
- **Integration Tests**: Run on pull requests
- **E2E Tests**: Run on main branch
- **Performance Tests**: Run nightly
```yaml ### Local CI Simulation
name: Tests ```bash
# Simulate CI pipeline locally
python tests/test_runner.py --all --coverage
on: [push, pull_request] # Generate coverage report
coverage html -o coverage_html/
jobs:
unit-tests:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/setup-python@v4
with:
python: "3.11"
- name: Install dependencies
run: |
pip install -r requirements.txt
pip install -r requirements-test.txt
- name: Run unit tests
run: |
pytest tests/unit/ -v --cov=apps --cov-report=xml
- name: Upload coverage
uses: codecov/codecov-action@v3
with:
file: ./coverage.xml
integration-tests:
runs-on: ubuntu-latest
services:
postgres:
image: postgres:15
env:
POSTGRES_PASSWORD: postgres
options: >-
--health-cmd pg_isready
--health-interval 10s
--health-timeout 5s
--health-retries 5
redis:
image: redis:7
options: >-
--health-cmd "redis-cli ping"
--health-interval 10s
--health-timeout 5s
--health-retries 5
steps:
- uses: actions/checkout@v3
- uses: actions/setup-python@v4
with:
python: "3.11"
- name: Install dependencies
run: |
pip install -r requirements.txt
pip install -r requirements-test.txt
- name: Run integration tests
run: |
pytest tests/integration/ -v
env:
DATABASE_URL: postgresql://postgres:postgres@localhost/postgres
REDIS_URL: redis://localhost:6379/0
```
### Docker Compose for Testing
```yaml
# docker-compose.test.yml
version: '3.8'
services:
postgres:
image: postgres:15
environment:
POSTGRES_DB: aitbc_test
POSTGRES_USER: test
POSTGRES_PASSWORD: test
ports:
- "5433:5432"
healthcheck:
test: ["CMD-SHELL", "pg_isready -U test"]
interval: 5s
timeout: 5s
retries: 5
redis:
image: redis:7-alpine
ports:
- "6380:6379"
healthcheck:
test: ["CMD", "redis-cli", "ping"]
interval: 5s
timeout: 5s
retries: 5
coordinator:
build: ./apps/coordinator-api
environment:
DATABASE_URL: postgresql://test:test@postgres:5432/aitbc_test
REDIS_URL: redis://redis:6379/0
depends_on:
postgres:
condition: service_healthy
redis:
condition: service_healthy
ports:
- "8001:8000"
``` ```
## Troubleshooting ## Troubleshooting
### Common Issues ### Common Issues
1. **Import Errors** #### Test Failures Due to Services
```bash
# Ensure PYTHONPATH is set
export PYTHONPATH="${PYTHONPATH}:$(pwd)"
# Or install in development mode
pip install -e .
```
2. **Database Connection Errors**
```bash
# Check if PostgreSQL is running
pg_isready -h localhost -p 5432
# Create test database
createdb -h localhost -p 5432 aitbc_test
```
3. **Redis Connection Errors**
```bash
# Check if Redis is running
redis-cli ping
# Use correct database
redis-cli -n 1 FLUSHDB
```
4. **Test Timeouts**
```bash
# Increase timeout for slow tests
pytest --timeout=600
# Run tests sequentially
pytest -n 0
```
5. **Port Conflicts**
```bash
# Kill processes using ports
lsof -ti:8001 | xargs kill -9
lsof -ti:8002 | xargs kill -9
```
### Debugging Tests
```bash ```bash
# Run with verbose output # Check service status
pytest -v -s systemctl status aitbc-blockchain-node
systemctl status aitbc-coordinator
# Stop on first failure # Restart services if needed
pytest -x sudo systemctl restart aitbc-blockchain-node
sudo systemctl restart aitbc-coordinator
# Run with pdb on failure
pytest --pdb
# Print local variables on failure
pytest --tb=long
# Run specific test with debugging
pytest tests/unit/test_coordinator_api.py::TestJobEndpoints::test_create_job_success -v -s --pdb
``` ```
### Performance Issues #### Environment Issues
```bash ```bash
# Profile test execution # Check virtual environment
pytest --profile which python
python --version
# Find slowest tests # Check dependencies
pytest --durations=10 pip list | grep pytest
# Run with memory profiling # Reinstall if needed
pytest --memprof pip install -e .
``` ```
### Test Data Issues #### Database Issues
```bash ```bash
# Clean test database # Reset test database
psql -h localhost -U test -d aitbc_test -c "DROP SCHEMA public CASCADE; CREATE SCHEMA public;" rm test_aitbc.db
python -m alembic upgrade head
# Reset Redis # Check database connectivity
redis-cli -n 1 FLUSHALL python -c "from aitbc_core.db import engine; print(engine.url)"
# Regenerate test fixtures
python tests/generate_fixtures.py
``` ```
## Best Practices ### Test Logs
All test logs are now centralized in `/var/log/aitbc/`:
- **test.log**: General test output
- **test_results.txt**: Test results summary
- **performance_test.log**: Performance test results
1. **Write Isolated Tests**: Each test should be independent ### Getting Help
2. **Use Descriptive Names**: Test names should describe what they test 1. Check test logs in `/var/log/aitbc/`
3. **Mock External Dependencies**: Use mocks for external services 2. Review test documentation in `tests/docs/`
4. **Clean Up Resources**: Use fixtures for setup/teardown 3. Run tests with verbose output: `pytest -v`
5. **Test Edge Cases**: Don't just test happy paths 4. Check service status and configuration
6. **Use Type Hints**: Makes tests more maintainable
7. **Document Complex Tests**: Add comments for complex logic
## Contributing ---
When adding new tests: *Last updated: March 30, 2026*
*For questions or suggestions, please open an issue or contact the development team.*
1. Follow the existing structure and naming conventions ---
2. Add appropriate markers (`@pytest.mark.unit`, etc.)
3. Update this README if adding new test types
4. Ensure tests pass on CI before submitting PR
5. Add coverage for new features
## Resources *Last updated: March 30, 2026*
*For questions or suggestions, please open an issue or contact the development team.*
- [Pytest Documentation](https://docs.pytest.org/)
- [Locust Documentation](https://docs.locust.io/)
- [Security Testing Guide](https://owasp.org/www-project-security-testing-guide/)
- [Load Testing Best Practices](https://docs.locust.io/en/stable/writing-a-locustfile.html)