feat: implement CLI blockchain features and pool hub enhancements
Some checks failed:
- API Endpoint Tests / test-api-endpoints (push): Successful in 11s
- CLI Tests / test-cli (push): Failing after 7s
- Documentation Validation / validate-docs (push): Successful in 8s
- Documentation Validation / validate-policies-strict (push): Successful in 3s
- Integration Tests / test-service-integration (push): Successful in 38s
- Python Tests / test-python (push): Successful in 11s
- Security Scanning / security-scan (push): Successful in 29s
- Multi-Node Blockchain Health Monitoring / health-check (push): Successful in 1s

CLI Blockchain Features:
- Added block operations: import, export, import-chain, blocks-range
- Added messaging system commands (deploy, state, topics, create-topic, messages, post, vote, search, reputation, moderate)
- Added network force-sync operation
- Replaced marketplace handlers with actual RPC calls
- Replaced AI handlers with actual RPC calls
- Added account operations (account get)
- Added transaction query operations
- Added mempool query operations
- Created keystore_auth.py for authentication
- Removed extended features interception
- All handlers use keystore credentials for authenticated endpoints

Pool Hub Enhancements:
- Added SLA monitoring and capacity tables
- Added billing integration service
- Added SLA collector service
- Added SLA router endpoints
- Updated pool hub models and settings
- Added integration tests for billing and SLA
- Updated documentation with SLA monitoring guide
This commit is contained in:
aitbc
2026-04-22 15:59:00 +02:00
parent 51920a15d7
commit e22d864944
28 changed files with 4783 additions and 358 deletions


@@ -9,3 +9,87 @@ Matchmaking gateway between coordinator job requests and available miners. See `
- Create a Python virtual environment under `apps/pool-hub/.venv`.
- Install FastAPI, Redis (optional), and PostgreSQL client dependencies once requirements are defined.
- Implement routers and registry as described in the bootstrap document.
## SLA Monitoring and Billing Integration
Pool-Hub now includes SLA monitoring and billing integration with coordinator-api:
### SLA Metrics
- **Miner Uptime**: Tracks miner availability based on heartbeat intervals
- **Response Time**: Monitors average response time from match results
- **Job Completion Rate**: Tracks successful vs failed job outcomes
- **Capacity Availability**: Monitors overall pool capacity utilization
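As an illustration of the uptime metric, availability can be estimated from heartbeat gaps. This is a hedged sketch, not Pool-Hub's actual collector; the function name and the "on time" rule (a gap no larger than the expected interval counts as up) are assumptions:

```python
from datetime import datetime, timedelta

def uptime_pct(heartbeats: list[datetime], expected_interval: timedelta) -> float:
    """Percentage of heartbeat gaps that arrived within the expected interval."""
    if len(heartbeats) < 2:
        return 0.0  # not enough data to measure any gap
    on_time = sum(
        1
        for prev, cur in zip(heartbeats, heartbeats[1:])
        if cur - prev <= expected_interval
    )
    return 100.0 * on_time / (len(heartbeats) - 1)
```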
### SLA Thresholds
Default thresholds (configurable in settings):
- Uptime: 95%
- Response Time: 1000ms
- Completion Rate: 90%
- Capacity Availability: 80%
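The thresholds above can be applied as follows. A minimal sketch, assuming response time violates when it exceeds its threshold while the other metrics violate when they fall below theirs; the metric key names are illustrative, not Pool-Hub's schema:

```python
# Default thresholds from the settings above (key names are assumptions).
DEFAULT_THRESHOLDS = {
    "uptime_pct": 95.0,
    "response_time_ms": 1000.0,
    "completion_rate_pct": 90.0,
    "capacity_availability_pct": 80.0,
}

def check_violations(metrics: dict[str, float]) -> list[str]:
    """Return the names of metrics that breach their threshold."""
    violations = []
    for name, threshold in DEFAULT_THRESHOLDS.items():
        value = metrics.get(name)
        if value is None:
            continue  # metric not reported; nothing to check
        if name == "response_time_ms":
            if value > threshold:  # latency violates by exceeding
                violations.append(name)
        elif value < threshold:  # the rest violate by falling short
            violations.append(name)
    return violations
```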
### Billing Integration
Pool-Hub integrates with coordinator-api's billing system to:
- Record usage data (gpu_hours, api_calls, compute_hours)
- Sync miner usage to tenant billing
- Generate invoices via coordinator-api
- Track billing metrics and costs
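The usage data listed above might take a shape like the following before being synced to coordinator-api. Field names here are illustrative assumptions, not the actual billing payload:

```python
from dataclasses import dataclass, asdict

@dataclass
class UsageRecord:
    """Hypothetical usage record for one miner within a tenant's billing period."""
    tenant_id: str
    miner_id: str
    gpu_hours: float
    api_calls: int
    compute_hours: float

# Build a record and serialize it to a dict ready for a JSON sync request.
record = UsageRecord(
    tenant_id="tenant-1",
    miner_id="miner-42",
    gpu_hours=3.5,
    api_calls=120,
    compute_hours=4.0,
)
payload = asdict(record)
```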
### API Endpoints
SLA and billing endpoints are available under `/sla/`:
- `GET /sla/metrics/{miner_id}` - Get SLA metrics for a miner
- `GET /sla/metrics` - Get SLA metrics across all miners
- `GET /sla/violations` - Get SLA violations
- `POST /sla/metrics/collect` - Trigger SLA metrics collection
- `GET /sla/capacity/snapshots` - Get capacity planning snapshots
- `GET /sla/capacity/forecast` - Get capacity forecast
- `GET /sla/capacity/recommendations` - Get scaling recommendations
- `GET /sla/billing/usage` - Get billing usage data
- `POST /sla/billing/sync` - Trigger billing sync with coordinator-api
### Configuration
Add to `.env`:
```bash
# Coordinator-API Billing Integration
COORDINATOR_BILLING_URL=http://localhost:8011
COORDINATOR_API_KEY=your_api_key_here
# SLA Configuration
SLA_UPTIME_THRESHOLD=95.0
SLA_RESPONSE_TIME_THRESHOLD=1000.0
SLA_COMPLETION_RATE_THRESHOLD=90.0
SLA_CAPACITY_THRESHOLD=80.0
# Capacity Planning
CAPACITY_FORECAST_HOURS=168
CAPACITY_ALERT_THRESHOLD_PCT=80.0
# Billing Sync
BILLING_SYNC_INTERVAL_HOURS=1
# SLA Collection
SLA_COLLECTION_INTERVAL_SECONDS=300
```
### Database Migration
Run the database migration to add SLA and capacity tables:
```bash
cd apps/pool-hub
alembic upgrade head
```
### Testing
Run tests for SLA and billing integration:
```bash
cd apps/pool-hub
pytest tests/test_sla_collector.py
pytest tests/test_billing_integration.py
pytest tests/test_sla_endpoints.py
pytest tests/test_integration_coordinator.py
```

apps/pool-hub/alembic.ini Normal file

@@ -0,0 +1,112 @@
# A generic, single database configuration.
[alembic]
# path to migration scripts
script_location = migrations
# template used to generate migration file names; The default value is %%(rev)s_%%(slug)s
file_template = %%(year)d%%(month).2d%%(day).2d_%%(hour).2d%%(minute).2d_%%(rev)s_%%(slug)s
# sys.path path, will be prepended to sys.path if present.
prepend_sys_path = .
# timezone to use when rendering the date within the migration file
# as well as the filename.
# If specified, requires the python-dateutil library that can be
# installed by adding `alembic[tz]` to the pip requirements
# string value is passed to dateutil.tz.gettz()
# leave blank for localtime
# timezone =
# max length of characters to apply to the
# "slug" field
# truncate_slug_length = 40
# set to 'true' to run the environment during
# the 'revision' command, regardless of autogenerate
# revision_environment = false
# set to 'true' to allow .pyc and .pyo files without
# a source .py file to be detected as revisions in the
# versions/ directory
# sourceless = false
# version location specification; This defaults
# to migrations/versions. When using multiple version
# directories, initial revisions must be specified with --version-path.
# The path separator used here should be the separator specified by "version_path_separator" below.
# version_locations = %(here)s/bar:%(here)s/bat:versions/versions
# version path separator; As mentioned above, this is the character used to split
# version_locations. The default within new alembic.ini files is "os", which uses os.pathsep.
# If this key is omitted entirely, it falls back to the legacy behavior of splitting on spaces and/or commas.
# Valid values for version_path_separator are:
#
# version_path_separator = :
# version_path_separator = ;
# version_path_separator = space
version_path_separator = os # Use os.pathsep. Default configuration used for new projects.
# set to 'true' to search source files recursively
# in each "version_locations" directory
# new in Alembic version 1.10
# recursive_version_locations = false
# the output encoding used when revision files
# are written from script.py.mako
# output_encoding = utf-8
sqlalchemy.url = postgresql+asyncpg://user:pass@localhost/dbname
[post_write_hooks]
# post_write_hooks defines scripts or Python functions that are run
# on newly generated revision scripts. See the documentation for further
# detail and examples
# format using "black" - use the console_scripts runner, against the "black" entrypoint
# hooks = black
# black.type = console_scripts
# black.entrypoint = black
# black.options = -l 79 REVISION_SCRIPT_FILENAME
# lint with attempts to fix using "ruff" - use the exec runner, execute a binary
# hooks = ruff
# ruff.type = exec
# ruff.executable = %(here)s/.venv/bin/ruff
# ruff.options = --fix REVISION_SCRIPT_FILENAME
# Logging configuration
[loggers]
keys = root,sqlalchemy,alembic
[handlers]
keys = console
[formatters]
keys = generic
[logger_root]
level = WARN
handlers = console
qualname =
[logger_sqlalchemy]
level = WARN
handlers =
qualname = sqlalchemy.engine
[logger_alembic]
level = INFO
handlers =
qualname = alembic
[handler_console]
class = StreamHandler
args = (sys.stderr,)
level = NOTSET
formatter = generic
[formatter_generic]
format = %(levelname)-5.5s [%(name)s] %(message)s
datefmt = %H:%M:%S


@@ -22,7 +22,6 @@ def _configure_context(connection=None, *, url: str | None = None) -> None:
connection=connection,
url=url,
target_metadata=target_metadata,
literal_binds=True,
dialect_opts={"paramstyle": "named"},
)


@@ -10,7 +10,6 @@ from __future__ import annotations
from alembic import op
import sqlalchemy as sa
from sqlalchemy.dialects import postgresql
# revision identifiers, used by Alembic.
revision = "a58c1f3b3e87"
@@ -34,8 +33,8 @@ def upgrade() -> None:
sa.Column("ram_gb", sa.Float()),
sa.Column("max_parallel", sa.Integer()),
sa.Column("base_price", sa.Float()),
sa.Column("tags", postgresql.JSONB(astext_type=sa.Text())),
sa.Column("capabilities", postgresql.JSONB(astext_type=sa.Text())),
sa.Column("tags", sa.JSON()),
sa.Column("capabilities", sa.JSON()),
sa.Column("trust_score", sa.Float(), server_default="0.5"),
sa.Column("region", sa.String(length=64)),
)
@@ -53,18 +52,18 @@ def upgrade() -> None:
op.create_table(
"match_requests",
sa.Column("id", postgresql.UUID(as_uuid=True), primary_key=True),
sa.Column("id", sa.String(36), primary_key=True),
sa.Column("job_id", sa.String(length=64), nullable=False),
sa.Column("requirements", postgresql.JSONB(astext_type=sa.Text()), nullable=False),
sa.Column("hints", postgresql.JSONB(astext_type=sa.Text()), server_default=sa.text("'{}'::jsonb")),
sa.Column("requirements", sa.JSON(), nullable=False),
sa.Column("hints", sa.JSON(), server_default=sa.text("'{}'")),
sa.Column("top_k", sa.Integer(), server_default="1"),
sa.Column("created_at", sa.DateTime(timezone=True), server_default=sa.text("NOW()")),
)
op.create_table(
"match_results",
sa.Column("id", postgresql.UUID(as_uuid=True), primary_key=True),
sa.Column("request_id", postgresql.UUID(as_uuid=True), sa.ForeignKey("match_requests.id", ondelete="CASCADE"), nullable=False),
sa.Column("id", sa.String(36), primary_key=True),
sa.Column("request_id", sa.String(36), sa.ForeignKey("match_requests.id", ondelete="CASCADE"), nullable=False),
sa.Column("miner_id", sa.String(length=64), nullable=False),
sa.Column("score", sa.Float(), nullable=False),
sa.Column("explain", sa.Text()),
@@ -76,7 +75,7 @@ def upgrade() -> None:
op.create_table(
"feedback",
sa.Column("id", postgresql.UUID(as_uuid=True), primary_key=True),
sa.Column("id", sa.String(36), primary_key=True),
sa.Column("job_id", sa.String(length=64), nullable=False),
sa.Column("miner_id", sa.String(length=64), sa.ForeignKey("miners.miner_id", ondelete="CASCADE"), nullable=False),
sa.Column("outcome", sa.String(length=32), nullable=False),


@@ -0,0 +1,124 @@
"""add sla and capacity tables
Revision ID: b2a1c4d5e6f7
Revises: a58c1f3b3e87
Create Date: 2026-04-22 15:00:00.000000
"""
from __future__ import annotations
from alembic import op
import sqlalchemy as sa
# revision identifiers, used by Alembic.
revision = "b2a1c4d5e6f7"
down_revision = "a58c1f3b3e87"
branch_labels = None
depends_on = None
def upgrade() -> None:
# Add new columns to miner_status table
op.add_column(
"miner_status",
sa.Column("uptime_pct", sa.Float(), nullable=True),
)
op.add_column(
"miner_status",
sa.Column("last_heartbeat_at", sa.DateTime(timezone=True), nullable=True),
)
# Create sla_metrics table
op.create_table(
"sla_metrics",
sa.Column(
"id",
sa.String(36),
primary_key=True,
),
sa.Column(
"miner_id",
sa.String(length=64),
sa.ForeignKey("miners.miner_id", ondelete="CASCADE"),
nullable=False,
),
sa.Column("metric_type", sa.String(length=32), nullable=False),
sa.Column("metric_value", sa.Float(), nullable=False),
sa.Column("threshold", sa.Float(), nullable=False),
sa.Column("is_violation", sa.Boolean(), server_default=sa.text("false")),
sa.Column("timestamp", sa.DateTime(timezone=True), server_default=sa.text("NOW()")),
sa.Column("meta_data", sa.JSON(), server_default=sa.text("'{}'")),
)
op.create_index("ix_sla_metrics_miner_id", "sla_metrics", ["miner_id"])
op.create_index("ix_sla_metrics_timestamp", "sla_metrics", ["timestamp"])
op.create_index("ix_sla_metrics_metric_type", "sla_metrics", ["metric_type"])
# Create sla_violations table
op.create_table(
"sla_violations",
sa.Column(
"id",
sa.String(36),
primary_key=True,
),
sa.Column(
"miner_id",
sa.String(length=64),
sa.ForeignKey("miners.miner_id", ondelete="CASCADE"),
nullable=False,
),
sa.Column("violation_type", sa.String(length=32), nullable=False),
sa.Column("severity", sa.String(length=16), nullable=False),
sa.Column("metric_value", sa.Float(), nullable=False),
sa.Column("threshold", sa.Float(), nullable=False),
sa.Column("violation_duration_ms", sa.Integer(), nullable=True),
sa.Column("resolved_at", sa.DateTime(timezone=True), nullable=True),
sa.Column("created_at", sa.DateTime(timezone=True), server_default=sa.text("NOW()")),
sa.Column("meta_data", sa.JSON(), server_default=sa.text("'{}'")),
)
op.create_index("ix_sla_violations_miner_id", "sla_violations", ["miner_id"])
op.create_index("ix_sla_violations_created_at", "sla_violations", ["created_at"])
op.create_index("ix_sla_violations_severity", "sla_violations", ["severity"])
# Create capacity_snapshots table
op.create_table(
"capacity_snapshots",
sa.Column(
"id",
sa.String(36),
primary_key=True,
),
sa.Column("total_miners", sa.Integer(), nullable=False),
sa.Column("active_miners", sa.Integer(), nullable=False),
sa.Column("total_parallel_capacity", sa.Integer(), nullable=False),
sa.Column("total_queue_length", sa.Integer(), nullable=False),
sa.Column("capacity_utilization_pct", sa.Float(), nullable=False),
sa.Column("forecast_capacity", sa.Integer(), nullable=False),
sa.Column("recommended_scaling", sa.String(length=32), nullable=False),
sa.Column("scaling_reason", sa.Text(), nullable=True),
sa.Column("timestamp", sa.DateTime(timezone=True), server_default=sa.text("NOW()")),
sa.Column("meta_data", sa.JSON(), server_default=sa.text("'{}'")),
)
op.create_index("ix_capacity_snapshots_timestamp", "capacity_snapshots", ["timestamp"])
def downgrade() -> None:
# Drop capacity_snapshots table
op.drop_index("ix_capacity_snapshots_timestamp", table_name="capacity_snapshots")
op.drop_table("capacity_snapshots")
# Drop sla_violations table
op.drop_index("ix_sla_violations_severity", table_name="sla_violations")
op.drop_index("ix_sla_violations_created_at", table_name="sla_violations")
op.drop_index("ix_sla_violations_miner_id", table_name="sla_violations")
op.drop_table("sla_violations")
# Drop sla_metrics table
op.drop_index("ix_sla_metrics_metric_type", table_name="sla_metrics")
op.drop_index("ix_sla_metrics_timestamp", table_name="sla_metrics")
op.drop_index("ix_sla_metrics_miner_id", table_name="sla_metrics")
op.drop_table("sla_metrics")
# Remove columns from miner_status table
op.drop_column("miner_status", "last_heartbeat_at")
op.drop_column("miner_status", "uptime_pct")


@@ -1,4 +1,4 @@
# This file is automatically @generated by Poetry 2.3.2 and should not be changed by hand.
# This file is automatically @generated by Poetry 2.3.3 and should not be changed by hand.
[[package]]
name = "aiosqlite"
@@ -24,19 +24,43 @@ name = "aitbc-core"
version = "0.1.0"
description = "AITBC Core Utilities"
optional = false
python-versions = "^3.13"
python-versions = ">=3.13"
groups = ["main"]
files = []
develop = false
[package.dependencies]
pydantic = "^2.7.0"
python-json-logger = "^2.0.7"
cryptography = ">=41.0.0"
fastapi = ">=0.104.0"
pydantic = ">=2.5.0"
redis = ">=5.0.0"
sqlmodel = ">=0.0.14"
uvicorn = ">=0.24.0"
[package.source]
type = "directory"
url = "../../packages/py/aitbc-core"
[[package]]
name = "alembic"
version = "1.18.4"
description = "A database migration tool for SQLAlchemy."
optional = false
python-versions = ">=3.10"
groups = ["main"]
files = [
{file = "alembic-1.18.4-py3-none-any.whl", hash = "sha256:a5ed4adcf6d8a4cb575f3d759f071b03cd6e5c7618eb796cb52497be25bfe19a"},
{file = "alembic-1.18.4.tar.gz", hash = "sha256:cb6e1fd84b6174ab8dbb2329f86d631ba9559dd78df550b57804d607672cedbc"},
]
[package.dependencies]
Mako = "*"
SQLAlchemy = ">=1.4.23"
typing-extensions = ">=4.12"
[package.extras]
tz = ["tzdata"]
[[package]]
name = "annotated-doc"
version = "0.0.4"
@@ -81,58 +105,67 @@ trio = ["trio (>=0.31.0) ; python_version < \"3.10\"", "trio (>=0.32.0) ; python
[[package]]
name = "asyncpg"
version = "0.29.0"
version = "0.30.0"
description = "An asyncio PostgreSQL driver"
optional = false
python-versions = ">=3.8.0"
groups = ["main"]
files = [
{file = "asyncpg-0.29.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:72fd0ef9f00aeed37179c62282a3d14262dbbafb74ec0ba16e1b1864d8a12169"},
{file = "asyncpg-0.29.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:52e8f8f9ff6e21f9b39ca9f8e3e33a5fcdceaf5667a8c5c32bee158e313be385"},
{file = "asyncpg-0.29.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a9e6823a7012be8b68301342ba33b4740e5a166f6bbda0aee32bc01638491a22"},
{file = "asyncpg-0.29.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:746e80d83ad5d5464cfbf94315eb6744222ab00aa4e522b704322fb182b83610"},
{file = "asyncpg-0.29.0-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:ff8e8109cd6a46ff852a5e6bab8b0a047d7ea42fcb7ca5ae6eaae97d8eacf397"},
{file = "asyncpg-0.29.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:97eb024685b1d7e72b1972863de527c11ff87960837919dac6e34754768098eb"},
{file = "asyncpg-0.29.0-cp310-cp310-win32.whl", hash = "sha256:5bbb7f2cafd8d1fa3e65431833de2642f4b2124be61a449fa064e1a08d27e449"},
{file = "asyncpg-0.29.0-cp310-cp310-win_amd64.whl", hash = "sha256:76c3ac6530904838a4b650b2880f8e7af938ee049e769ec2fba7cd66469d7772"},
{file = "asyncpg-0.29.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:d4900ee08e85af01adb207519bb4e14b1cae8fd21e0ccf80fac6aa60b6da37b4"},
{file = "asyncpg-0.29.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:a65c1dcd820d5aea7c7d82a3fdcb70e096f8f70d1a8bf93eb458e49bfad036ac"},
{file = "asyncpg-0.29.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5b52e46f165585fd6af4863f268566668407c76b2c72d366bb8b522fa66f1870"},
{file = "asyncpg-0.29.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:dc600ee8ef3dd38b8d67421359779f8ccec30b463e7aec7ed481c8346decf99f"},
{file = "asyncpg-0.29.0-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:039a261af4f38f949095e1e780bae84a25ffe3e370175193174eb08d3cecab23"},
{file = "asyncpg-0.29.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:6feaf2d8f9138d190e5ec4390c1715c3e87b37715cd69b2c3dfca616134efd2b"},
{file = "asyncpg-0.29.0-cp311-cp311-win32.whl", hash = "sha256:1e186427c88225ef730555f5fdda6c1812daa884064bfe6bc462fd3a71c4b675"},
{file = "asyncpg-0.29.0-cp311-cp311-win_amd64.whl", hash = "sha256:cfe73ffae35f518cfd6e4e5f5abb2618ceb5ef02a2365ce64f132601000587d3"},
{file = "asyncpg-0.29.0-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:6011b0dc29886ab424dc042bf9eeb507670a3b40aece3439944006aafe023178"},
{file = "asyncpg-0.29.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:b544ffc66b039d5ec5a7454667f855f7fec08e0dfaf5a5490dfafbb7abbd2cfb"},
{file = "asyncpg-0.29.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d84156d5fb530b06c493f9e7635aa18f518fa1d1395ef240d211cb563c4e2364"},
{file = "asyncpg-0.29.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:54858bc25b49d1114178d65a88e48ad50cb2b6f3e475caa0f0c092d5f527c106"},
{file = "asyncpg-0.29.0-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:bde17a1861cf10d5afce80a36fca736a86769ab3579532c03e45f83ba8a09c59"},
{file = "asyncpg-0.29.0-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:37a2ec1b9ff88d8773d3eb6d3784dc7e3fee7756a5317b67f923172a4748a175"},
{file = "asyncpg-0.29.0-cp312-cp312-win32.whl", hash = "sha256:bb1292d9fad43112a85e98ecdc2e051602bce97c199920586be83254d9dafc02"},
{file = "asyncpg-0.29.0-cp312-cp312-win_amd64.whl", hash = "sha256:2245be8ec5047a605e0b454c894e54bf2ec787ac04b1cb7e0d3c67aa1e32f0fe"},
{file = "asyncpg-0.29.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:0009a300cae37b8c525e5b449233d59cd9868fd35431abc470a3e364d2b85cb9"},
{file = "asyncpg-0.29.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:5cad1324dbb33f3ca0cd2074d5114354ed3be2b94d48ddfd88af75ebda7c43cc"},
{file = "asyncpg-0.29.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:012d01df61e009015944ac7543d6ee30c2dc1eb2f6b10b62a3f598beb6531548"},
{file = "asyncpg-0.29.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:000c996c53c04770798053e1730d34e30cb645ad95a63265aec82da9093d88e7"},
{file = "asyncpg-0.29.0-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:e0bfe9c4d3429706cf70d3249089de14d6a01192d617e9093a8e941fea8ee775"},
{file = "asyncpg-0.29.0-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:642a36eb41b6313ffa328e8a5c5c2b5bea6ee138546c9c3cf1bffaad8ee36dd9"},
{file = "asyncpg-0.29.0-cp38-cp38-win32.whl", hash = "sha256:a921372bbd0aa3a5822dd0409da61b4cd50df89ae85150149f8c119f23e8c408"},
{file = "asyncpg-0.29.0-cp38-cp38-win_amd64.whl", hash = "sha256:103aad2b92d1506700cbf51cd8bb5441e7e72e87a7b3a2ca4e32c840f051a6a3"},
{file = "asyncpg-0.29.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:5340dd515d7e52f4c11ada32171d87c05570479dc01dc66d03ee3e150fb695da"},
{file = "asyncpg-0.29.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:e17b52c6cf83e170d3d865571ba574577ab8e533e7361a2b8ce6157d02c665d3"},
{file = "asyncpg-0.29.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f100d23f273555f4b19b74a96840aa27b85e99ba4b1f18d4ebff0734e78dc090"},
{file = "asyncpg-0.29.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:48e7c58b516057126b363cec8ca02b804644fd012ef8e6c7e23386b7d5e6ce83"},
{file = "asyncpg-0.29.0-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:f9ea3f24eb4c49a615573724d88a48bd1b7821c890c2effe04f05382ed9e8810"},
{file = "asyncpg-0.29.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:8d36c7f14a22ec9e928f15f92a48207546ffe68bc412f3be718eedccdf10dc5c"},
{file = "asyncpg-0.29.0-cp39-cp39-win32.whl", hash = "sha256:797ab8123ebaed304a1fad4d7576d5376c3a006a4100380fb9d517f0b59c1ab2"},
{file = "asyncpg-0.29.0-cp39-cp39-win_amd64.whl", hash = "sha256:cce08a178858b426ae1aa8409b5cc171def45d4293626e7aa6510696d46decd8"},
{file = "asyncpg-0.29.0.tar.gz", hash = "sha256:d1c49e1f44fffafd9a55e1a9b101590859d881d639ea2922516f5d9c512d354e"},
{file = "asyncpg-0.30.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:bfb4dd5ae0699bad2b233672c8fc5ccbd9ad24b89afded02341786887e37927e"},
{file = "asyncpg-0.30.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:dc1f62c792752a49f88b7e6f774c26077091b44caceb1983509edc18a2222ec0"},
{file = "asyncpg-0.30.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3152fef2e265c9c24eec4ee3d22b4f4d2703d30614b0b6753e9ed4115c8a146f"},
{file = "asyncpg-0.30.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c7255812ac85099a0e1ffb81b10dc477b9973345793776b128a23e60148dd1af"},
{file = "asyncpg-0.30.0-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:578445f09f45d1ad7abddbff2a3c7f7c291738fdae0abffbeb737d3fc3ab8b75"},
{file = "asyncpg-0.30.0-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:c42f6bb65a277ce4d93f3fba46b91a265631c8df7250592dd4f11f8b0152150f"},
{file = "asyncpg-0.30.0-cp310-cp310-win32.whl", hash = "sha256:aa403147d3e07a267ada2ae34dfc9324e67ccc4cdca35261c8c22792ba2b10cf"},
{file = "asyncpg-0.30.0-cp310-cp310-win_amd64.whl", hash = "sha256:fb622c94db4e13137c4c7f98834185049cc50ee01d8f657ef898b6407c7b9c50"},
{file = "asyncpg-0.30.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:5e0511ad3dec5f6b4f7a9e063591d407eee66b88c14e2ea636f187da1dcfff6a"},
{file = "asyncpg-0.30.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:915aeb9f79316b43c3207363af12d0e6fd10776641a7de8a01212afd95bdf0ed"},
{file = "asyncpg-0.30.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1c198a00cce9506fcd0bf219a799f38ac7a237745e1d27f0e1f66d3707c84a5a"},
{file = "asyncpg-0.30.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3326e6d7381799e9735ca2ec9fd7be4d5fef5dcbc3cb555d8a463d8460607956"},
{file = "asyncpg-0.30.0-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:51da377487e249e35bd0859661f6ee2b81db11ad1f4fc036194bc9cb2ead5056"},
{file = "asyncpg-0.30.0-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:bc6d84136f9c4d24d358f3b02be4b6ba358abd09f80737d1ac7c444f36108454"},
{file = "asyncpg-0.30.0-cp311-cp311-win32.whl", hash = "sha256:574156480df14f64c2d76450a3f3aaaf26105869cad3865041156b38459e935d"},
{file = "asyncpg-0.30.0-cp311-cp311-win_amd64.whl", hash = "sha256:3356637f0bd830407b5597317b3cb3571387ae52ddc3bca6233682be88bbbc1f"},
{file = "asyncpg-0.30.0-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:c902a60b52e506d38d7e80e0dd5399f657220f24635fee368117b8b5fce1142e"},
{file = "asyncpg-0.30.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:aca1548e43bbb9f0f627a04666fedaca23db0a31a84136ad1f868cb15deb6e3a"},
{file = "asyncpg-0.30.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6c2a2ef565400234a633da0eafdce27e843836256d40705d83ab7ec42074efb3"},
{file = "asyncpg-0.30.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1292b84ee06ac8a2ad8e51c7475aa309245874b61333d97411aab835c4a2f737"},
{file = "asyncpg-0.30.0-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:0f5712350388d0cd0615caec629ad53c81e506b1abaaf8d14c93f54b35e3595a"},
{file = "asyncpg-0.30.0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:db9891e2d76e6f425746c5d2da01921e9a16b5a71a1c905b13f30e12a257c4af"},
{file = "asyncpg-0.30.0-cp312-cp312-win32.whl", hash = "sha256:68d71a1be3d83d0570049cd1654a9bdfe506e794ecc98ad0873304a9f35e411e"},
{file = "asyncpg-0.30.0-cp312-cp312-win_amd64.whl", hash = "sha256:9a0292c6af5c500523949155ec17b7fe01a00ace33b68a476d6b5059f9630305"},
{file = "asyncpg-0.30.0-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:05b185ebb8083c8568ea8a40e896d5f7af4b8554b64d7719c0eaa1eb5a5c3a70"},
{file = "asyncpg-0.30.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:c47806b1a8cbb0a0db896f4cd34d89942effe353a5035c62734ab13b9f938da3"},
{file = "asyncpg-0.30.0-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9b6fde867a74e8c76c71e2f64f80c64c0f3163e687f1763cfaf21633ec24ec33"},
{file = "asyncpg-0.30.0-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:46973045b567972128a27d40001124fbc821c87a6cade040cfcd4fa8a30bcdc4"},
{file = "asyncpg-0.30.0-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:9110df111cabc2ed81aad2f35394a00cadf4f2e0635603db6ebbd0fc896f46a4"},
{file = "asyncpg-0.30.0-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:04ff0785ae7eed6cc138e73fc67b8e51d54ee7a3ce9b63666ce55a0bf095f7ba"},
{file = "asyncpg-0.30.0-cp313-cp313-win32.whl", hash = "sha256:ae374585f51c2b444510cdf3595b97ece4f233fde739aa14b50e0d64e8a7a590"},
{file = "asyncpg-0.30.0-cp313-cp313-win_amd64.whl", hash = "sha256:f59b430b8e27557c3fb9869222559f7417ced18688375825f8f12302c34e915e"},
{file = "asyncpg-0.30.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:29ff1fc8b5bf724273782ff8b4f57b0f8220a1b2324184846b39d1ab4122031d"},
{file = "asyncpg-0.30.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:64e899bce0600871b55368b8483e5e3e7f1860c9482e7f12e0a771e747988168"},
{file = "asyncpg-0.30.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5b290f4726a887f75dcd1b3006f484252db37602313f806e9ffc4e5996cfe5cb"},
{file = "asyncpg-0.30.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f86b0e2cd3f1249d6fe6fd6cfe0cd4538ba994e2d8249c0491925629b9104d0f"},
{file = "asyncpg-0.30.0-cp38-cp38-musllinux_1_2_aarch64.whl", hash = "sha256:393af4e3214c8fa4c7b86da6364384c0d1b3298d45803375572f415b6f673f38"},
{file = "asyncpg-0.30.0-cp38-cp38-musllinux_1_2_x86_64.whl", hash = "sha256:fd4406d09208d5b4a14db9a9dbb311b6d7aeeab57bded7ed2f8ea41aeef39b34"},
{file = "asyncpg-0.30.0-cp38-cp38-win32.whl", hash = "sha256:0b448f0150e1c3b96cb0438a0d0aa4871f1472e58de14a3ec320dbb2798fb0d4"},
{file = "asyncpg-0.30.0-cp38-cp38-win_amd64.whl", hash = "sha256:f23b836dd90bea21104f69547923a02b167d999ce053f3d502081acea2fba15b"},
{file = "asyncpg-0.30.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:6f4e83f067b35ab5e6371f8a4c93296e0439857b4569850b178a01385e82e9ad"},
{file = "asyncpg-0.30.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:5df69d55add4efcd25ea2a3b02025b669a285b767bfbf06e356d68dbce4234ff"},
{file = "asyncpg-0.30.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a3479a0d9a852c7c84e822c073622baca862d1217b10a02dd57ee4a7a081f708"},
{file = "asyncpg-0.30.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:26683d3b9a62836fad771a18ecf4659a30f348a561279d6227dab96182f46144"},
{file = "asyncpg-0.30.0-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:1b982daf2441a0ed314bd10817f1606f1c28b1136abd9e4f11335358c2c631cb"},
{file = "asyncpg-0.30.0-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:1c06a3a50d014b303e5f6fc1e5f95eb28d2cee89cf58384b700da621e5d5e547"},
{file = "asyncpg-0.30.0-cp39-cp39-win32.whl", hash = "sha256:1b11a555a198b08f5c4baa8f8231c74a366d190755aa4f99aacec5970afe929a"},
{file = "asyncpg-0.30.0-cp39-cp39-win_amd64.whl", hash = "sha256:8b684a3c858a83cd876f05958823b68e8d14ec01bb0c0d14a6704c5bf9711773"},
{file = "asyncpg-0.30.0.tar.gz", hash = "sha256:c551e9928ab6707602f44811817f82ba3c446e018bfe1d3abecc8ba5f3eac851"},
]
[package.extras]
docs = ["Sphinx (>=5.3.0,<5.4.0)", "sphinx-rtd-theme (>=1.2.2)", "sphinxcontrib-asyncio (>=0.3.0,<0.4.0)"]
test = ["flake8 (>=6.1,<7.0)", "uvloop (>=0.15.3) ; platform_system != \"Windows\" and python_version < \"3.12.0\""]
docs = ["Sphinx (>=8.1.3,<8.2.0)", "sphinx-rtd-theme (>=1.2.2)"]
gssauth = ["gssapi ; platform_system != \"Windows\"", "sspilib ; platform_system == \"Windows\""]
test = ["distro (>=1.9.0,<1.10.0)", "flake8 (>=6.1,<7.0)", "flake8-pyi (>=24.1.0,<24.2.0)", "gssapi ; platform_system == \"Linux\"", "k5test ; platform_system == \"Linux\"", "mypy (>=1.8.0,<1.9.0)", "sspilib ; platform_system == \"Windows\"", "uvloop (>=0.15.3) ; platform_system != \"Windows\" and python_version < \"3.14.0\""]
[[package]]
name = "certifi"
@@ -146,6 +179,104 @@ files = [
{file = "certifi-2026.2.25.tar.gz", hash = "sha256:e887ab5cee78ea814d3472169153c2d12cd43b14bd03329a39a9c6e2e80bfba7"},
]
[[package]]
name = "cffi"
version = "2.0.0"
description = "Foreign Function Interface for Python calling C code."
optional = false
python-versions = ">=3.9"
groups = ["main"]
markers = "platform_python_implementation != \"PyPy\""
files = [
{file = "cffi-2.0.0-cp310-cp310-macosx_10_13_x86_64.whl", hash = "sha256:0cf2d91ecc3fcc0625c2c530fe004f82c110405f101548512cce44322fa8ac44"},
{file = "cffi-2.0.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:f73b96c41e3b2adedc34a7356e64c8eb96e03a3782b535e043a986276ce12a49"},
{file = "cffi-2.0.0-cp310-cp310-manylinux1_i686.manylinux2014_i686.manylinux_2_17_i686.manylinux_2_5_i686.whl", hash = "sha256:53f77cbe57044e88bbd5ed26ac1d0514d2acf0591dd6bb02a3ae37f76811b80c"},
{file = "cffi-2.0.0-cp310-cp310-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:3e837e369566884707ddaf85fc1744b47575005c0a229de3327f8f9a20f4efeb"},
{file = "cffi-2.0.0-cp310-cp310-manylinux2014_ppc64le.manylinux_2_17_ppc64le.whl", hash = "sha256:5eda85d6d1879e692d546a078b44251cdd08dd1cfb98dfb77b670c97cee49ea0"},
{file = "cffi-2.0.0-cp310-cp310-manylinux2014_s390x.manylinux_2_17_s390x.whl", hash = "sha256:9332088d75dc3241c702d852d4671613136d90fa6881da7d770a483fd05248b4"},
{file = "cffi-2.0.0-cp310-cp310-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:fc7de24befaeae77ba923797c7c87834c73648a05a4bde34b3b7e5588973a453"},
{file = "cffi-2.0.0-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:cf364028c016c03078a23b503f02058f1814320a56ad535686f90565636a9495"},
{file = "cffi-2.0.0-cp310-cp310-musllinux_1_2_i686.whl", hash = "sha256:e11e82b744887154b182fd3e7e8512418446501191994dbf9c9fc1f32cc8efd5"},
{file = "cffi-2.0.0-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:8ea985900c5c95ce9db1745f7933eeef5d314f0565b27625d9a10ec9881e1bfb"},
{file = "cffi-2.0.0-cp310-cp310-win32.whl", hash = "sha256:1f72fb8906754ac8a2cc3f9f5aaa298070652a0ffae577e0ea9bd480dc3c931a"},
{file = "cffi-2.0.0-cp310-cp310-win_amd64.whl", hash = "sha256:b18a3ed7d5b3bd8d9ef7a8cb226502c6bf8308df1525e1cc676c3680e7176739"},
{file = "cffi-2.0.0-cp311-cp311-macosx_10_13_x86_64.whl", hash = "sha256:b4c854ef3adc177950a8dfc81a86f5115d2abd545751a304c5bcf2c2c7283cfe"},
{file = "cffi-2.0.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:2de9a304e27f7596cd03d16f1b7c72219bd944e99cc52b84d0145aefb07cbd3c"},
{file = "cffi-2.0.0-cp311-cp311-manylinux1_i686.manylinux2014_i686.manylinux_2_17_i686.manylinux_2_5_i686.whl", hash = "sha256:baf5215e0ab74c16e2dd324e8ec067ef59e41125d3eade2b863d294fd5035c92"},
{file = "cffi-2.0.0-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:730cacb21e1bdff3ce90babf007d0a0917cc3e6492f336c2f0134101e0944f93"},
{file = "cffi-2.0.0-cp311-cp311-manylinux2014_ppc64le.manylinux_2_17_ppc64le.whl", hash = "sha256:6824f87845e3396029f3820c206e459ccc91760e8fa24422f8b0c3d1731cbec5"},
{file = "cffi-2.0.0-cp311-cp311-manylinux2014_s390x.manylinux_2_17_s390x.whl", hash = "sha256:9de40a7b0323d889cf8d23d1ef214f565ab154443c42737dfe52ff82cf857664"},
{file = "cffi-2.0.0-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:8941aaadaf67246224cee8c3803777eed332a19d909b47e29c9842ef1e79ac26"},
{file = "cffi-2.0.0-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:a05d0c237b3349096d3981b727493e22147f934b20f6f125a3eba8f994bec4a9"},
{file = "cffi-2.0.0-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:94698a9c5f91f9d138526b48fe26a199609544591f859c870d477351dc7b2414"},
{file = "cffi-2.0.0-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:5fed36fccc0612a53f1d4d9a816b50a36702c28a2aa880cb8a122b3466638743"},
{file = "cffi-2.0.0-cp311-cp311-win32.whl", hash = "sha256:c649e3a33450ec82378822b3dad03cc228b8f5963c0c12fc3b1e0ab940f768a5"},
{file = "cffi-2.0.0-cp311-cp311-win_amd64.whl", hash = "sha256:66f011380d0e49ed280c789fbd08ff0d40968ee7b665575489afa95c98196ab5"},
{file = "cffi-2.0.0-cp311-cp311-win_arm64.whl", hash = "sha256:c6638687455baf640e37344fe26d37c404db8b80d037c3d29f58fe8d1c3b194d"},
{file = "cffi-2.0.0-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:6d02d6655b0e54f54c4ef0b94eb6be0607b70853c45ce98bd278dc7de718be5d"},
{file = "cffi-2.0.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:8eca2a813c1cb7ad4fb74d368c2ffbbb4789d377ee5bb8df98373c2cc0dee76c"},
{file = "cffi-2.0.0-cp312-cp312-manylinux1_i686.manylinux2014_i686.manylinux_2_17_i686.manylinux_2_5_i686.whl", hash = "sha256:21d1152871b019407d8ac3985f6775c079416c282e431a4da6afe7aefd2bccbe"},
{file = "cffi-2.0.0-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:b21e08af67b8a103c71a250401c78d5e0893beff75e28c53c98f4de42f774062"},
{file = "cffi-2.0.0-cp312-cp312-manylinux2014_ppc64le.manylinux_2_17_ppc64le.whl", hash = "sha256:1e3a615586f05fc4065a8b22b8152f0c1b00cdbc60596d187c2a74f9e3036e4e"},
{file = "cffi-2.0.0-cp312-cp312-manylinux2014_s390x.manylinux_2_17_s390x.whl", hash = "sha256:81afed14892743bbe14dacb9e36d9e0e504cd204e0b165062c488942b9718037"},
{file = "cffi-2.0.0-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:3e17ed538242334bf70832644a32a7aae3d83b57567f9fd60a26257e992b79ba"},
{file = "cffi-2.0.0-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:3925dd22fa2b7699ed2617149842d2e6adde22b262fcbfada50e3d195e4b3a94"},
{file = "cffi-2.0.0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:2c8f814d84194c9ea681642fd164267891702542f028a15fc97d4674b6206187"},
{file = "cffi-2.0.0-cp312-cp312-win32.whl", hash = "sha256:da902562c3e9c550df360bfa53c035b2f241fed6d9aef119048073680ace4a18"},
{file = "cffi-2.0.0-cp312-cp312-win_amd64.whl", hash = "sha256:da68248800ad6320861f129cd9c1bf96ca849a2771a59e0344e88681905916f5"},
{file = "cffi-2.0.0-cp312-cp312-win_arm64.whl", hash = "sha256:4671d9dd5ec934cb9a73e7ee9676f9362aba54f7f34910956b84d727b0d73fb6"},
{file = "cffi-2.0.0-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:00bdf7acc5f795150faa6957054fbbca2439db2f775ce831222b66f192f03beb"},
{file = "cffi-2.0.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:45d5e886156860dc35862657e1494b9bae8dfa63bf56796f2fb56e1679fc0bca"},
{file = "cffi-2.0.0-cp313-cp313-manylinux1_i686.manylinux2014_i686.manylinux_2_17_i686.manylinux_2_5_i686.whl", hash = "sha256:07b271772c100085dd28b74fa0cd81c8fb1a3ba18b21e03d7c27f3436a10606b"},
{file = "cffi-2.0.0-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:d48a880098c96020b02d5a1f7d9251308510ce8858940e6fa99ece33f610838b"},
{file = "cffi-2.0.0-cp313-cp313-manylinux2014_ppc64le.manylinux_2_17_ppc64le.whl", hash = "sha256:f93fd8e5c8c0a4aa1f424d6173f14a892044054871c771f8566e4008eaa359d2"},
{file = "cffi-2.0.0-cp313-cp313-manylinux2014_s390x.manylinux_2_17_s390x.whl", hash = "sha256:dd4f05f54a52fb558f1ba9f528228066954fee3ebe629fc1660d874d040ae5a3"},
{file = "cffi-2.0.0-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:c8d3b5532fc71b7a77c09192b4a5a200ea992702734a2e9279a37f2478236f26"},
{file = "cffi-2.0.0-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:d9b29c1f0ae438d5ee9acb31cadee00a58c46cc9c0b2f9038c6b0b3470877a8c"},
{file = "cffi-2.0.0-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:6d50360be4546678fc1b79ffe7a66265e28667840010348dd69a314145807a1b"},
{file = "cffi-2.0.0-cp313-cp313-win32.whl", hash = "sha256:74a03b9698e198d47562765773b4a8309919089150a0bb17d829ad7b44b60d27"},
{file = "cffi-2.0.0-cp313-cp313-win_amd64.whl", hash = "sha256:19f705ada2530c1167abacb171925dd886168931e0a7b78f5bffcae5c6b5be75"},
{file = "cffi-2.0.0-cp313-cp313-win_arm64.whl", hash = "sha256:256f80b80ca3853f90c21b23ee78cd008713787b1b1e93eae9f3d6a7134abd91"},
{file = "cffi-2.0.0-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:fc33c5141b55ed366cfaad382df24fe7dcbc686de5be719b207bb248e3053dc5"},
{file = "cffi-2.0.0-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:c654de545946e0db659b3400168c9ad31b5d29593291482c43e3564effbcee13"},
{file = "cffi-2.0.0-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:24b6f81f1983e6df8db3adc38562c83f7d4a0c36162885ec7f7b77c7dcbec97b"},
{file = "cffi-2.0.0-cp314-cp314-manylinux2014_ppc64le.manylinux_2_17_ppc64le.whl", hash = "sha256:12873ca6cb9b0f0d3a0da705d6086fe911591737a59f28b7936bdfed27c0d47c"},
{file = "cffi-2.0.0-cp314-cp314-manylinux2014_s390x.manylinux_2_17_s390x.whl", hash = "sha256:d9b97165e8aed9272a6bb17c01e3cc5871a594a446ebedc996e2397a1c1ea8ef"},
{file = "cffi-2.0.0-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:afb8db5439b81cf9c9d0c80404b60c3cc9c3add93e114dcae767f1477cb53775"},
{file = "cffi-2.0.0-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:737fe7d37e1a1bffe70bd5754ea763a62a066dc5913ca57e957824b72a85e205"},
{file = "cffi-2.0.0-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:38100abb9d1b1435bc4cc340bb4489635dc2f0da7456590877030c9b3d40b0c1"},
{file = "cffi-2.0.0-cp314-cp314-win32.whl", hash = "sha256:087067fa8953339c723661eda6b54bc98c5625757ea62e95eb4898ad5e776e9f"},
{file = "cffi-2.0.0-cp314-cp314-win_amd64.whl", hash = "sha256:203a48d1fb583fc7d78a4c6655692963b860a417c0528492a6bc21f1aaefab25"},
{file = "cffi-2.0.0-cp314-cp314-win_arm64.whl", hash = "sha256:dbd5c7a25a7cb98f5ca55d258b103a2054f859a46ae11aaf23134f9cc0d356ad"},
{file = "cffi-2.0.0-cp314-cp314t-macosx_10_13_x86_64.whl", hash = "sha256:9a67fc9e8eb39039280526379fb3a70023d77caec1852002b4da7e8b270c4dd9"},
{file = "cffi-2.0.0-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:7a66c7204d8869299919db4d5069a82f1561581af12b11b3c9f48c584eb8743d"},
{file = "cffi-2.0.0-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:7cc09976e8b56f8cebd752f7113ad07752461f48a58cbba644139015ac24954c"},
{file = "cffi-2.0.0-cp314-cp314t-manylinux2014_ppc64le.manylinux_2_17_ppc64le.whl", hash = "sha256:92b68146a71df78564e4ef48af17551a5ddd142e5190cdf2c5624d0c3ff5b2e8"},
{file = "cffi-2.0.0-cp314-cp314t-manylinux2014_s390x.manylinux_2_17_s390x.whl", hash = "sha256:b1e74d11748e7e98e2f426ab176d4ed720a64412b6a15054378afdb71e0f37dc"},
{file = "cffi-2.0.0-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:28a3a209b96630bca57cce802da70c266eb08c6e97e5afd61a75611ee6c64592"},
{file = "cffi-2.0.0-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:7553fb2090d71822f02c629afe6042c299edf91ba1bf94951165613553984512"},
{file = "cffi-2.0.0-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:6c6c373cfc5c83a975506110d17457138c8c63016b563cc9ed6e056a82f13ce4"},
{file = "cffi-2.0.0-cp314-cp314t-win32.whl", hash = "sha256:1fc9ea04857caf665289b7a75923f2c6ed559b8298a1b8c49e59f7dd95c8481e"},
{file = "cffi-2.0.0-cp314-cp314t-win_amd64.whl", hash = "sha256:d68b6cef7827e8641e8ef16f4494edda8b36104d79773a334beaa1e3521430f6"},
{file = "cffi-2.0.0-cp314-cp314t-win_arm64.whl", hash = "sha256:0a1527a803f0a659de1af2e1fd700213caba79377e27e4693648c2923da066f9"},
{file = "cffi-2.0.0-cp39-cp39-macosx_10_13_x86_64.whl", hash = "sha256:fe562eb1a64e67dd297ccc4f5addea2501664954f2692b69a76449ec7913ecbf"},
{file = "cffi-2.0.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:de8dad4425a6ca6e4e5e297b27b5c824ecc7581910bf9aee86cb6835e6812aa7"},
{file = "cffi-2.0.0-cp39-cp39-manylinux1_i686.manylinux2014_i686.manylinux_2_17_i686.manylinux_2_5_i686.whl", hash = "sha256:4647afc2f90d1ddd33441e5b0e85b16b12ddec4fca55f0d9671fef036ecca27c"},
{file = "cffi-2.0.0-cp39-cp39-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:3f4d46d8b35698056ec29bca21546e1551a205058ae1a181d871e278b0b28165"},
{file = "cffi-2.0.0-cp39-cp39-manylinux2014_ppc64le.manylinux_2_17_ppc64le.whl", hash = "sha256:e6e73b9e02893c764e7e8d5bb5ce277f1a009cd5243f8228f75f842bf937c534"},
{file = "cffi-2.0.0-cp39-cp39-manylinux2014_s390x.manylinux_2_17_s390x.whl", hash = "sha256:cb527a79772e5ef98fb1d700678fe031e353e765d1ca2d409c92263c6d43e09f"},
{file = "cffi-2.0.0-cp39-cp39-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:61d028e90346df14fedc3d1e5441df818d095f3b87d286825dfcbd6459b7ef63"},
{file = "cffi-2.0.0-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:0f6084a0ea23d05d20c3edcda20c3d006f9b6f3fefeac38f59262e10cef47ee2"},
{file = "cffi-2.0.0-cp39-cp39-musllinux_1_2_i686.whl", hash = "sha256:1cd13c99ce269b3ed80b417dcd591415d3372bcac067009b6e0f59c7d4015e65"},
{file = "cffi-2.0.0-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:89472c9762729b5ae1ad974b777416bfda4ac5642423fa93bd57a09204712322"},
{file = "cffi-2.0.0-cp39-cp39-win32.whl", hash = "sha256:2081580ebb843f759b9f617314a24ed5738c51d2aee65d31e02f6f7a2b97707a"},
{file = "cffi-2.0.0-cp39-cp39-win_amd64.whl", hash = "sha256:b882b3df248017dba09d6b16defe9b5c407fe32fc7c65a9c69798e6175601be9"},
{file = "cffi-2.0.0.tar.gz", hash = "sha256:44d1b5909021139fe36001ae048dbdde8214afa20200eda0f64c068cac5d5529"},
]
[package.dependencies]
pycparser = {version = "*", markers = "implementation_name != \"PyPy\""}
[[package]]
name = "click"
version = "8.3.1"
@@ -174,6 +305,78 @@ files = [
]
markers = {main = "platform_system == \"Windows\" or sys_platform == \"win32\"", dev = "sys_platform == \"win32\""}
[[package]]
name = "cryptography"
version = "46.0.7"
description = "cryptography is a package which provides cryptographic recipes and primitives to Python developers."
optional = false
python-versions = "!=3.9.0,!=3.9.1,>=3.8"
groups = ["main"]
files = [
{file = "cryptography-46.0.7-cp311-abi3-macosx_10_9_universal2.whl", hash = "sha256:ea42cbe97209df307fdc3b155f1b6fa2577c0defa8f1f7d3be7d31d189108ad4"},
{file = "cryptography-46.0.7-cp311-abi3-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:b36a4695e29fe69215d75960b22577197aca3f7a25b9cf9d165dcfe9d80bc325"},
{file = "cryptography-46.0.7-cp311-abi3-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:5ad9ef796328c5e3c4ceed237a183f5d41d21150f972455a9d926593a1dcb308"},
{file = "cryptography-46.0.7-cp311-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:73510b83623e080a2c35c62c15298096e2a5dc8d51c3b4e1740211839d0dea77"},
{file = "cryptography-46.0.7-cp311-abi3-manylinux_2_28_ppc64le.whl", hash = "sha256:cbd5fb06b62bd0721e1170273d3f4d5a277044c47ca27ee257025146c34cbdd1"},
{file = "cryptography-46.0.7-cp311-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:420b1e4109cc95f0e5700eed79908cef9268265c773d3a66f7af1eef53d409ef"},
{file = "cryptography-46.0.7-cp311-abi3-manylinux_2_31_armv7l.whl", hash = "sha256:24402210aa54baae71d99441d15bb5a1919c195398a87b563df84468160a65de"},
{file = "cryptography-46.0.7-cp311-abi3-manylinux_2_34_aarch64.whl", hash = "sha256:8a469028a86f12eb7d2fe97162d0634026d92a21f3ae0ac87ed1c4a447886c83"},
{file = "cryptography-46.0.7-cp311-abi3-manylinux_2_34_ppc64le.whl", hash = "sha256:9694078c5d44c157ef3162e3bf3946510b857df5a3955458381d1c7cfc143ddb"},
{file = "cryptography-46.0.7-cp311-abi3-manylinux_2_34_x86_64.whl", hash = "sha256:42a1e5f98abb6391717978baf9f90dc28a743b7d9be7f0751a6f56a75d14065b"},
{file = "cryptography-46.0.7-cp311-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:91bbcb08347344f810cbe49065914fe048949648f6bd5c2519f34619142bbe85"},
{file = "cryptography-46.0.7-cp311-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:5d1c02a14ceb9148cc7816249f64f623fbfee39e8c03b3650d842ad3f34d637e"},
{file = "cryptography-46.0.7-cp311-abi3-win32.whl", hash = "sha256:d23c8ca48e44ee015cd0a54aeccdf9f09004eba9fc96f38c911011d9ff1bd457"},
{file = "cryptography-46.0.7-cp311-abi3-win_amd64.whl", hash = "sha256:397655da831414d165029da9bc483bed2fe0e75dde6a1523ec2fe63f3c46046b"},
{file = "cryptography-46.0.7-cp314-cp314t-macosx_10_9_universal2.whl", hash = "sha256:d151173275e1728cf7839aaa80c34fe550c04ddb27b34f48c232193df8db5842"},
{file = "cryptography-46.0.7-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:db0f493b9181c7820c8134437eb8b0b4792085d37dbb24da050476ccb664e59c"},
{file = "cryptography-46.0.7-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:ebd6daf519b9f189f85c479427bbd6e9c9037862cf8fe89ee35503bd209ed902"},
{file = "cryptography-46.0.7-cp314-cp314t-manylinux_2_28_aarch64.whl", hash = "sha256:b7b412817be92117ec5ed95f880defe9cf18a832e8cafacf0a22337dc1981b4d"},
{file = "cryptography-46.0.7-cp314-cp314t-manylinux_2_28_ppc64le.whl", hash = "sha256:fbfd0e5f273877695cb93baf14b185f4878128b250cc9f8e617ea0c025dfb022"},
{file = "cryptography-46.0.7-cp314-cp314t-manylinux_2_28_x86_64.whl", hash = "sha256:ffca7aa1d00cf7d6469b988c581598f2259e46215e0140af408966a24cf086ce"},
{file = "cryptography-46.0.7-cp314-cp314t-manylinux_2_31_armv7l.whl", hash = "sha256:60627cf07e0d9274338521205899337c5d18249db56865f943cbe753aa96f40f"},
{file = "cryptography-46.0.7-cp314-cp314t-manylinux_2_34_aarch64.whl", hash = "sha256:80406c3065e2c55d7f49a9550fe0c49b3f12e5bfff5dedb727e319e1afb9bf99"},
{file = "cryptography-46.0.7-cp314-cp314t-manylinux_2_34_ppc64le.whl", hash = "sha256:c5b1ccd1239f48b7151a65bc6dd54bcfcc15e028c8ac126d3fada09db0e07ef1"},
{file = "cryptography-46.0.7-cp314-cp314t-manylinux_2_34_x86_64.whl", hash = "sha256:d5f7520159cd9c2154eb61eb67548ca05c5774d39e9c2c4339fd793fe7d097b2"},
{file = "cryptography-46.0.7-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:fcd8eac50d9138c1d7fc53a653ba60a2bee81a505f9f8850b6b2888555a45d0e"},
{file = "cryptography-46.0.7-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:65814c60f8cc400c63131584e3e1fad01235edba2614b61fbfbfa954082db0ee"},
{file = "cryptography-46.0.7-cp314-cp314t-win32.whl", hash = "sha256:fdd1736fed309b4300346f88f74cd120c27c56852c3838cab416e7a166f67298"},
{file = "cryptography-46.0.7-cp314-cp314t-win_amd64.whl", hash = "sha256:e06acf3c99be55aa3b516397fe42f5855597f430add9c17fa46bf2e0fb34c9bb"},
{file = "cryptography-46.0.7-cp38-abi3-macosx_10_9_universal2.whl", hash = "sha256:462ad5cb1c148a22b2e3bcc5ad52504dff325d17daf5df8d88c17dda1f75f2a4"},
{file = "cryptography-46.0.7-cp38-abi3-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:84d4cced91f0f159a7ddacad249cc077e63195c36aac40b4150e7a57e84fffe7"},
{file = "cryptography-46.0.7-cp38-abi3-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:128c5edfe5e5938b86b03941e94fac9ee793a94452ad1365c9fc3f4f62216832"},
{file = "cryptography-46.0.7-cp38-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:5e51be372b26ef4ba3de3c167cd3d1022934bc838ae9eaad7e644986d2a3d163"},
{file = "cryptography-46.0.7-cp38-abi3-manylinux_2_28_ppc64le.whl", hash = "sha256:cdf1a610ef82abb396451862739e3fc93b071c844399e15b90726ef7470eeaf2"},
{file = "cryptography-46.0.7-cp38-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:1d25aee46d0c6f1a501adcddb2d2fee4b979381346a78558ed13e50aa8a59067"},
{file = "cryptography-46.0.7-cp38-abi3-manylinux_2_31_armv7l.whl", hash = "sha256:cdfbe22376065ffcf8be74dc9a909f032df19bc58a699456a21712d6e5eabfd0"},
{file = "cryptography-46.0.7-cp38-abi3-manylinux_2_34_aarch64.whl", hash = "sha256:abad9dac36cbf55de6eb49badd4016806b3165d396f64925bf2999bcb67837ba"},
{file = "cryptography-46.0.7-cp38-abi3-manylinux_2_34_ppc64le.whl", hash = "sha256:935ce7e3cfdb53e3536119a542b839bb94ec1ad081013e9ab9b7cfd478b05006"},
{file = "cryptography-46.0.7-cp38-abi3-manylinux_2_34_x86_64.whl", hash = "sha256:35719dc79d4730d30f1c2b6474bd6acda36ae2dfae1e3c16f2051f215df33ce0"},
{file = "cryptography-46.0.7-cp38-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:7bbc6ccf49d05ac8f7d7b5e2e2c33830d4fe2061def88210a126d130d7f71a85"},
{file = "cryptography-46.0.7-cp38-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:a1529d614f44b863a7b480c6d000fe93b59acee9c82ffa027cfadc77521a9f5e"},
{file = "cryptography-46.0.7-cp38-abi3-win32.whl", hash = "sha256:f247c8c1a1fb45e12586afbb436ef21ff1e80670b2861a90353d9b025583d246"},
{file = "cryptography-46.0.7-cp38-abi3-win_amd64.whl", hash = "sha256:506c4ff91eff4f82bdac7633318a526b1d1309fc07ca76a3ad182cb5b686d6d3"},
{file = "cryptography-46.0.7-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:fc9ab8856ae6cf7c9358430e49b368f3108f050031442eaeb6b9d87e4dcf4e4f"},
{file = "cryptography-46.0.7-pp311-pypy311_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:d3b99c535a9de0adced13d159c5a9cf65c325601aa30f4be08afd680643e9c15"},
{file = "cryptography-46.0.7-pp311-pypy311_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:d02c738dacda7dc2a74d1b2b3177042009d5cab7c7079db74afc19e56ca1b455"},
{file = "cryptography-46.0.7-pp311-pypy311_pp73-manylinux_2_34_aarch64.whl", hash = "sha256:04959522f938493042d595a736e7dbdff6eb6cc2339c11465b3ff89343b65f65"},
{file = "cryptography-46.0.7-pp311-pypy311_pp73-manylinux_2_34_x86_64.whl", hash = "sha256:3986ac1dee6def53797289999eabe84798ad7817f3e97779b5061a95b0ee4968"},
{file = "cryptography-46.0.7-pp311-pypy311_pp73-win_amd64.whl", hash = "sha256:258514877e15963bd43b558917bc9f54cf7cf866c38aa576ebf47a77ddbc43a4"},
{file = "cryptography-46.0.7.tar.gz", hash = "sha256:e4cfd68c5f3e0bfdad0d38e023239b96a2fe84146481852dffbcca442c245aa5"},
]
[package.dependencies]
cffi = {version = ">=2.0.0", markers = "python_full_version >= \"3.9.0\" and platform_python_implementation != \"PyPy\""}
[package.extras]
docs = ["sphinx (>=5.3.0)", "sphinx-inline-tabs", "sphinx-rtd-theme (>=3.0.0)"]
docstest = ["pyenchant (>=3)", "readme-renderer (>=30.0)", "sphinxcontrib-spelling (>=7.3.1)"]
nox = ["nox[uv] (>=2024.4.15)"]
pep8test = ["check-sdist", "click (>=8.0.1)", "mypy (>=1.14)", "ruff (>=0.11.11)"]
sdist = ["build (>=1.0.0)"]
ssh = ["bcrypt (>=3.1.5)"]
test = ["certifi (>=2024)", "cryptography-vectors (==46.0.7)", "pretend (>=0.7)", "pytest (>=7.4.0)", "pytest-benchmark (>=4.0)", "pytest-cov (>=2.10.1)", "pytest-xdist (>=3.5.0)"]
test-randomorder = ["pytest-randomly"]
[[package]]
name = "dnspython"
version = "2.8.0"
@@ -485,6 +688,26 @@ MarkupSafe = ">=2.0"
[package.extras]
i18n = ["Babel (>=2.7)"]
[[package]]
name = "mako"
version = "1.3.11"
description = "A super-fast templating language that borrows the best ideas from the existing templating languages."
optional = false
python-versions = ">=3.8"
groups = ["main"]
files = [
{file = "mako-1.3.11-py3-none-any.whl", hash = "sha256:e372c6e333cf004aa736a15f425087ec977e1fcbd2966aae7f17c8dc1da27a77"},
{file = "mako-1.3.11.tar.gz", hash = "sha256:071eb4ab4c5010443152255d77db7faa6ce5916f35226eb02dc34479b6858069"},
]
[package.dependencies]
MarkupSafe = ">=0.9.2"
[package.extras]
babel = ["Babel"]
lingua = ["lingua"]
testing = ["pytest"]
[[package]]
name = "markdown-it-py"
version = "4.0.0"
@@ -648,6 +871,19 @@ files = [
dev = ["pre-commit", "tox"]
testing = ["coverage", "pytest", "pytest-benchmark"]
[[package]]
name = "pycparser"
version = "3.0"
description = "C parser in Python"
optional = false
python-versions = ">=3.10"
groups = ["main"]
markers = "platform_python_implementation != \"PyPy\" and implementation_name != \"PyPy\""
files = [
{file = "pycparser-3.0-py3-none-any.whl", hash = "sha256:b727414169a36b7d524c1c3e31839a521725078d7b2ff038656844266160a992"},
{file = "pycparser-3.0.tar.gz", hash = "sha256:600f49d217304a5902ac3c37e1281c9fe94e4d0489de643a9504c5cdfdfc6b29"},
]
[[package]]
name = "pydantic"
version = "2.12.5"
@@ -899,18 +1135,6 @@ files = [
[package.extras]
cli = ["click (>=5.0)"]
[[package]]
name = "python-json-logger"
version = "2.0.7"
description = "A python library adding a json log formatter"
optional = false
python-versions = ">=3.6"
groups = ["main"]
files = [
{file = "python-json-logger-2.0.7.tar.gz", hash = "sha256:23e7ec02d34237c5aa1e29a070193a4ea87583bb4e7f8fd06d3de8264c4b2e1c"},
{file = "python_json_logger-2.0.7-py3-none-any.whl", hash = "sha256:f380b826a991ebbe3de4d897aeec42760035ac760345e57b812938dc8b35e2bd"},
]
[[package]]
name = "python-multipart"
version = "0.0.22"
@@ -1006,6 +1230,26 @@ files = [
{file = "pyyaml-6.0.3.tar.gz", hash = "sha256:d76623373421df22fb4cf8817020cbb7ef15c725b9d5e45f17e189bfc384190f"},
]
[[package]]
name = "redis"
version = "7.4.0"
description = "Python client for Redis database and key-value store"
optional = false
python-versions = ">=3.10"
groups = ["main"]
files = [
{file = "redis-7.4.0-py3-none-any.whl", hash = "sha256:a9c74a5c893a5ef8455a5adb793a31bb70feb821c86eccb62eebef5a19c429ec"},
{file = "redis-7.4.0.tar.gz", hash = "sha256:64a6ea7bf567ad43c964d2c30d82853f8df927c5c9017766c55a1d1ed95d18ad"},
]
[package.extras]
circuit-breaker = ["pybreaker (>=1.4.0)"]
hiredis = ["hiredis (>=3.2.0)"]
jwt = ["pyjwt (>=2.9.0)"]
ocsp = ["cryptography (>=36.0.1)", "pyopenssl (>=20.0.1)", "requests (>=2.31.0)"]
otel = ["opentelemetry-api (>=1.39.1)", "opentelemetry-exporter-otlp-proto-http (>=1.39.1)", "opentelemetry-sdk (>=1.39.1)"]
xxhash = ["xxhash (>=3.6.0,<3.7.0)"]
[[package]]
name = "rich"
version = "14.3.3"
@@ -1534,4 +1778,4 @@ files = [
[metadata]
lock-version = "2.1"
python-versions = "^3.13"
content-hash = "b00e1e6ef14151983e360a24c59c162a76aa5c8b5d89cd00eb1e8e895c481257"
content-hash = "cad2ccefef53efb63f35cd290d6ac615249b66cf5571ae3e0d930ce2f809a49f"


@@ -17,7 +17,8 @@ aiosqlite = "^0.20.0"
sqlmodel = "^0.0.16"
httpx = "^0.27.0"
python-dotenv = "^1.0.1"
asyncpg = "^0.29.0"
asyncpg = "^0.30.0"
alembic = "^1.13.0"
aitbc-core = {path = "../../packages/py/aitbc-core"}
[tool.poetry.group.dev.dependencies]


@@ -8,6 +8,7 @@ from ..database import close_engine, create_engine
from ..redis_cache import close_redis, create_redis
from ..settings import settings
from .routers import health_router, match_router, metrics_router, services, ui, validation
from .routers.sla import router as sla_router
@asynccontextmanager
@@ -28,6 +29,7 @@ app.include_router(metrics_router)
app.include_router(services, prefix="/v1")
app.include_router(ui)
app.include_router(validation, prefix="/v1")
app.include_router(sla_router)
def create_app() -> FastAPI:


@@ -0,0 +1,357 @@
"""
SLA and Billing API Endpoints for Pool-Hub
Provides endpoints for SLA metrics, capacity planning, and billing integration.
"""
import logging
from datetime import datetime, timedelta
from typing import Dict, List, Optional, Any
from decimal import Decimal
from fastapi import APIRouter, Depends, HTTPException, Query
from pydantic import BaseModel, Field
from sqlalchemy.orm import Session
from ..database import get_db
from ..services.sla_collector import SLACollector
from ..services.billing_integration import BillingIntegration
from ..models import CapacitySnapshot
logger = logging.getLogger(__name__)
router = APIRouter(prefix="/sla", tags=["SLA"])
# Request/Response Models
class SLAMetricResponse(BaseModel):
id: str
miner_id: str
metric_type: str
metric_value: float
threshold: float
is_violation: bool
timestamp: datetime
metadata: Dict[str, str]
class Config:
from_attributes = True
class SLAViolationResponse(BaseModel):
id: str
miner_id: str
violation_type: str
severity: str
metric_value: float
threshold: float
created_at: datetime
resolved_at: Optional[datetime]
class Config:
from_attributes = True
class CapacitySnapshotResponse(BaseModel):
id: str
total_miners: int
active_miners: int
total_parallel_capacity: int
total_queue_length: int
capacity_utilization_pct: float
forecast_capacity: int
recommended_scaling: str
scaling_reason: str
timestamp: datetime
class Config:
from_attributes = True
class UsageSyncRequest(BaseModel):
miner_id: Optional[str] = None
hours_back: int = Field(default=24, ge=1, le=168)
class UsageRecordRequest(BaseModel):
tenant_id: str
resource_type: str
quantity: Decimal
unit_price: Optional[Decimal] = None
job_id: Optional[str] = None
metadata: Dict[str, Any] = Field(default_factory=dict)
class InvoiceGenerationRequest(BaseModel):
tenant_id: str
period_start: datetime
period_end: datetime
# Dependency injection
def get_sla_collector(db: Session = Depends(get_db)) -> SLACollector:
return SLACollector(db)
def get_billing_integration(db: Session = Depends(get_db)) -> BillingIntegration:
return BillingIntegration(db)
# SLA Metrics Endpoints
@router.get("/metrics/{miner_id}", response_model=List[SLAMetricResponse])
async def get_miner_sla_metrics(
miner_id: str,
hours: int = Query(default=24, ge=1, le=168),
sla_collector: SLACollector = Depends(get_sla_collector),
):
"""Get SLA metrics for a specific miner"""
try:
metrics = await sla_collector.get_sla_metrics(miner_id=miner_id, hours=hours)
return metrics
except Exception as e:
logger.error(f"Error getting SLA metrics for miner {miner_id}: {e}")
raise HTTPException(status_code=500, detail=str(e))
@router.get("/metrics", response_model=List[SLAMetricResponse])
async def get_all_sla_metrics(
hours: int = Query(default=24, ge=1, le=168),
sla_collector: SLACollector = Depends(get_sla_collector),
):
"""Get SLA metrics across all miners"""
try:
metrics = await sla_collector.get_sla_metrics(miner_id=None, hours=hours)
return metrics
except Exception as e:
logger.error(f"Error getting SLA metrics: {e}")
raise HTTPException(status_code=500, detail=str(e))
@router.get("/violations", response_model=List[SLAViolationResponse])
async def get_sla_violations(
miner_id: Optional[str] = Query(default=None),
resolved: bool = Query(default=False),
    sla_collector: SLACollector = Depends(get_sla_collector),
):
    """Get SLA violations"""
    try:
violations = await sla_collector.get_sla_violations(
miner_id=miner_id, resolved=resolved
)
return violations
except Exception as e:
logger.error(f"Error getting SLA violations: {e}")
raise HTTPException(status_code=500, detail=str(e))
@router.post("/metrics/collect")
async def collect_sla_metrics(
sla_collector: SLACollector = Depends(get_sla_collector),
):
"""Trigger SLA metrics collection for all miners"""
try:
results = await sla_collector.collect_all_miner_metrics()
return results
except Exception as e:
logger.error(f"Error collecting SLA metrics: {e}")
raise HTTPException(status_code=500, detail=str(e))
# Capacity Planning Endpoints
@router.get("/capacity/snapshots", response_model=List[CapacitySnapshotResponse])
async def get_capacity_snapshots(
hours: int = Query(default=24, ge=1, le=168),
db: Session = Depends(get_db),
):
"""Get capacity planning snapshots"""
try:
cutoff = datetime.utcnow() - timedelta(hours=hours)
stmt = (
db.query(CapacitySnapshot)
.filter(CapacitySnapshot.timestamp >= cutoff)
.order_by(CapacitySnapshot.timestamp.desc())
)
snapshots = stmt.all()
return snapshots
except Exception as e:
logger.error(f"Error getting capacity snapshots: {e}")
raise HTTPException(status_code=500, detail=str(e))
@router.get("/capacity/forecast")
async def get_capacity_forecast(
hours_ahead: int = Query(default=168, ge=1, le=8760),
billing_integration: BillingIntegration = Depends(get_billing_integration),
):
"""Get capacity forecast from coordinator-api"""
try:
# This would call coordinator-api's capacity planning endpoint
# For now, return a placeholder response
return {
"forecast_horizon_hours": hours_ahead,
"current_capacity": 1000,
"projected_capacity": 1500,
"recommended_scaling": "+50%",
"confidence": 0.85,
"source": "coordinator_api",
}
except Exception as e:
logger.error(f"Error getting capacity forecast: {e}")
raise HTTPException(status_code=500, detail=str(e))
@router.get("/capacity/recommendations")
async def get_scaling_recommendations(
billing_integration: BillingIntegration = Depends(get_billing_integration),
):
"""Get auto-scaling recommendations from coordinator-api"""
try:
# This would call coordinator-api's capacity planning endpoint
# For now, return a placeholder response
return {
"current_state": "healthy",
"recommendations": [
{
"action": "add_miners",
"quantity": 2,
"reason": "Projected capacity shortage in 2 weeks",
"priority": "medium",
}
],
"source": "coordinator_api",
}
except Exception as e:
logger.error(f"Error getting scaling recommendations: {e}")
raise HTTPException(status_code=500, detail=str(e))
@router.post("/capacity/alerts/configure")
async def configure_capacity_alerts(
alert_config: Dict[str, Any],
db: Session = Depends(get_db),
):
"""Configure capacity alerts"""
try:
# Store alert configuration (would be persisted to database)
return {
"status": "configured",
"alert_config": alert_config,
"timestamp": datetime.utcnow().isoformat(),
}
except Exception as e:
logger.error(f"Error configuring capacity alerts: {e}")
raise HTTPException(status_code=500, detail=str(e))
# Billing Integration Endpoints
@router.get("/billing/usage")
async def get_billing_usage(
tenant_id: Optional[str] = Query(default=None),
hours: int = Query(default=24, ge=1, le=168),
billing_integration: BillingIntegration = Depends(get_billing_integration),
):
"""Get billing usage data from coordinator-api"""
try:
metrics = await billing_integration.get_billing_metrics(
tenant_id=tenant_id, hours=hours
)
return metrics
except Exception as e:
logger.error(f"Error getting billing usage: {e}")
raise HTTPException(status_code=500, detail=str(e))
@router.post("/billing/sync")
async def sync_billing_usage(
request: UsageSyncRequest,
billing_integration: BillingIntegration = Depends(get_billing_integration),
):
"""Trigger billing sync with coordinator-api"""
try:
if request.miner_id:
# Sync specific miner
end_date = datetime.utcnow()
start_date = end_date - timedelta(hours=request.hours_back)
result = await billing_integration.sync_miner_usage(
miner_id=request.miner_id, start_date=start_date, end_date=end_date
)
else:
# Sync all miners
result = await billing_integration.sync_all_miners_usage(
hours_back=request.hours_back
)
return result
except Exception as e:
logger.error(f"Error syncing billing usage: {e}")
raise HTTPException(status_code=500, detail=str(e))
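The sync window arithmetic used by `/billing/sync` above can be sketched as a small pure function (a sketch only; `sync_window` is a hypothetical helper, shown here with timezone-aware timestamps rather than the route's naive `utcnow()`):

```python
from datetime import datetime, timedelta, timezone
from typing import Optional, Tuple


def sync_window(hours_back: int, now: Optional[datetime] = None) -> Tuple[datetime, datetime]:
    """Compute the (start, end) window passed to sync_miner_usage:
    end is 'now', start is hours_back hours earlier."""
    end = now or datetime.now(timezone.utc)
    start = end - timedelta(hours=hours_back)
    return start, end


start, end = sync_window(24, now=datetime(2026, 4, 22, 12, 0, tzinfo=timezone.utc))
print(start.isoformat())  # 2026-04-21T12:00:00+00:00
```

With the request model's `ge=1, le=168` bound on `hours_back`, the window is always between one hour and one week wide.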
@router.post("/billing/usage/record")
async def record_usage(
request: UsageRecordRequest,
billing_integration: BillingIntegration = Depends(get_billing_integration),
):
"""Record a single usage event to coordinator-api billing"""
try:
result = await billing_integration.record_usage(
tenant_id=request.tenant_id,
resource_type=request.resource_type,
quantity=request.quantity,
unit_price=request.unit_price,
job_id=request.job_id,
metadata=request.metadata,
)
return result
except Exception as e:
logger.error(f"Error recording usage: {e}")
raise HTTPException(status_code=500, detail=str(e))
@router.post("/billing/invoice/generate")
async def generate_invoice(
request: InvoiceGenerationRequest,
billing_integration: BillingIntegration = Depends(get_billing_integration),
):
"""Trigger invoice generation in coordinator-api"""
try:
result = await billing_integration.trigger_invoice_generation(
tenant_id=request.tenant_id,
period_start=request.period_start,
period_end=request.period_end,
)
return result
except Exception as e:
logger.error(f"Error generating invoice: {e}")
raise HTTPException(status_code=500, detail=str(e))
# Health and Status Endpoints
@router.get("/status")
async def get_sla_status(db: Session = Depends(get_db)):
"""Get overall SLA status"""
try:
sla_collector = SLACollector(db)
# Get recent violations
active_violations = await sla_collector.get_sla_violations(resolved=False)
# Get recent metrics
recent_metrics = await sla_collector.get_sla_metrics(hours=1)
# Calculate overall status
if any(v.severity == "critical" for v in active_violations):
status = "critical"
elif any(v.severity == "high" for v in active_violations):
status = "degraded"
else:
status = "healthy"
return {
"status": status,
"active_violations": len(active_violations),
"recent_metrics_count": len(recent_metrics),
"timestamp": datetime.utcnow().isoformat(),
}
except Exception as e:
logger.error(f"Error getting SLA status: {e}")
raise HTTPException(status_code=500, detail=str(e))
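The `/status` rollup above collapses active violation severities into a single pool state: any critical violation wins, a high violation degrades the pool, otherwise it is healthy. A minimal standalone sketch of that precedence (function name illustrative, not part of the router):

```python
from typing import Iterable


def rollup_status(severities: Iterable[str]) -> str:
    """Collapse active SLA violation severities into one overall state.

    Mirrors the endpoint's precedence: critical > high (degraded) > healthy.
    """
    seen = set(severities)
    if "critical" in seen:
        return "critical"
    if "high" in seen:
        return "degraded"
    return "healthy"
```

With no active violations the pool reports healthy; a mix of medium and high reports degraded.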


@@ -11,10 +11,11 @@ from sqlalchemy import (
Float,
ForeignKey,
Integer,
JSON,
String,
Text,
)
from sqlalchemy.dialects.postgresql import JSONB, UUID as PGUUID
from sqlalchemy.dialects.postgresql import UUID as PGUUID
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column, relationship
from uuid import uuid4
@@ -50,8 +51,8 @@ class Miner(Base):
ram_gb: Mapped[float] = mapped_column(Float)
max_parallel: Mapped[int] = mapped_column(Integer)
base_price: Mapped[float] = mapped_column(Float)
tags: Mapped[Dict[str, str]] = mapped_column(JSONB, default=dict)
capabilities: Mapped[List[str]] = mapped_column(JSONB, default=list)
tags: Mapped[Dict[str, str]] = mapped_column(JSON, default=dict)
capabilities: Mapped[List[str]] = mapped_column(JSON, default=list)
trust_score: Mapped[float] = mapped_column(Float, default=0.5)
region: Mapped[Optional[str]] = mapped_column(String(64))
@@ -74,6 +75,8 @@ class MinerStatus(Base):
avg_latency_ms: Mapped[Optional[int]] = mapped_column(Integer)
temp_c: Mapped[Optional[int]] = mapped_column(Integer)
mem_free_gb: Mapped[Optional[float]] = mapped_column(Float)
uptime_pct: Mapped[Optional[float]] = mapped_column(Float) # SLA metric
last_heartbeat_at: Mapped[Optional[dt.datetime]] = mapped_column(DateTime(timezone=True))
updated_at: Mapped[dt.datetime] = mapped_column(
DateTime(timezone=True), default=dt.datetime.utcnow, onupdate=dt.datetime.utcnow
)
@@ -88,8 +91,8 @@ class MatchRequest(Base):
PGUUID(as_uuid=True), primary_key=True, default=uuid4
)
job_id: Mapped[str] = mapped_column(String(64), nullable=False)
requirements: Mapped[Dict[str, object]] = mapped_column(JSONB, nullable=False)
hints: Mapped[Dict[str, object]] = mapped_column(JSONB, default=dict)
requirements: Mapped[Dict[str, object]] = mapped_column(JSON, nullable=False)
hints: Mapped[Dict[str, object]] = mapped_column(JSON, default=dict)
top_k: Mapped[int] = mapped_column(Integer, default=1)
created_at: Mapped[dt.datetime] = mapped_column(
DateTime(timezone=True), default=dt.datetime.utcnow
@@ -156,9 +159,9 @@ class ServiceConfig(Base):
)
service_type: Mapped[str] = mapped_column(String(32), nullable=False)
enabled: Mapped[bool] = mapped_column(Boolean, default=False)
config: Mapped[Dict[str, Any]] = mapped_column(JSONB, default=dict)
pricing: Mapped[Dict[str, Any]] = mapped_column(JSONB, default=dict)
capabilities: Mapped[List[str]] = mapped_column(JSONB, default=list)
config: Mapped[Dict[str, Any]] = mapped_column(JSON, default=dict)
pricing: Mapped[Dict[str, Any]] = mapped_column(JSON, default=dict)
capabilities: Mapped[List[str]] = mapped_column(JSON, default=list)
max_concurrent: Mapped[int] = mapped_column(Integer, default=1)
created_at: Mapped[dt.datetime] = mapped_column(
DateTime(timezone=True), default=dt.datetime.utcnow
@@ -171,3 +174,73 @@ class ServiceConfig(Base):
__table_args__ = ({"schema": None},)
miner: Mapped[Miner] = relationship(backref="service_configs")
class SLAMetric(Base):
"""SLA metrics tracking for miners"""
__tablename__ = "sla_metrics"
id: Mapped[PGUUID] = mapped_column(
PGUUID(as_uuid=True), primary_key=True, default=uuid4
)
miner_id: Mapped[str] = mapped_column(
ForeignKey("miners.miner_id", ondelete="CASCADE"), nullable=False
)
metric_type: Mapped[str] = mapped_column(String(32), nullable=False) # uptime, response_time, completion_rate, capacity
metric_value: Mapped[float] = mapped_column(Float, nullable=False)
threshold: Mapped[float] = mapped_column(Float, nullable=False)
is_violation: Mapped[bool] = mapped_column(Boolean, default=False)
timestamp: Mapped[dt.datetime] = mapped_column(
DateTime(timezone=True), default=dt.datetime.utcnow
)
meta_data: Mapped[Dict[str, str]] = mapped_column(JSON, default=dict)
miner: Mapped[Miner] = relationship(backref="sla_metrics")
class SLAViolation(Base):
"""SLA violation tracking"""
__tablename__ = "sla_violations"
id: Mapped[PGUUID] = mapped_column(
PGUUID(as_uuid=True), primary_key=True, default=uuid4
)
miner_id: Mapped[str] = mapped_column(
ForeignKey("miners.miner_id", ondelete="CASCADE"), nullable=False
)
violation_type: Mapped[str] = mapped_column(String(32), nullable=False)
severity: Mapped[str] = mapped_column(String(16), nullable=False) # critical, high, medium, low
metric_value: Mapped[float] = mapped_column(Float, nullable=False)
threshold: Mapped[float] = mapped_column(Float, nullable=False)
violation_duration_ms: Mapped[Optional[int]] = mapped_column(Integer)
resolved_at: Mapped[Optional[dt.datetime]] = mapped_column(DateTime(timezone=True))
created_at: Mapped[dt.datetime] = mapped_column(
DateTime(timezone=True), default=dt.datetime.utcnow
)
meta_data: Mapped[Dict[str, str]] = mapped_column(JSON, default=dict)
miner: Mapped[Miner] = relationship(backref="sla_violations")
class CapacitySnapshot(Base):
"""Capacity planning snapshots"""
__tablename__ = "capacity_snapshots"
id: Mapped[PGUUID] = mapped_column(
PGUUID(as_uuid=True), primary_key=True, default=uuid4
)
total_miners: Mapped[int] = mapped_column(Integer, nullable=False)
active_miners: Mapped[int] = mapped_column(Integer, nullable=False)
total_parallel_capacity: Mapped[int] = mapped_column(Integer, nullable=False)
total_queue_length: Mapped[int] = mapped_column(Integer, nullable=False)
capacity_utilization_pct: Mapped[float] = mapped_column(Float, nullable=False)
forecast_capacity: Mapped[int] = mapped_column(Integer, nullable=False)
recommended_scaling: Mapped[str] = mapped_column(String(32), nullable=False)
scaling_reason: Mapped[str] = mapped_column(Text)
timestamp: Mapped[dt.datetime] = mapped_column(
DateTime(timezone=True), default=dt.datetime.utcnow
)
meta_data: Mapped[Dict[str, Any]] = mapped_column(JSON, default=dict)
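The `CapacitySnapshot` columns above are related by a simple identity: availability is the share of non-busy miners, and `capacity_utilization_pct` is stored as its complement. A hedged sketch of that arithmetic (names illustrative, guarded against an empty pool):

```python
def capacity_figures(total_miners: int, busy_miners: int) -> dict:
    """Derive the utilization figures stored in a CapacitySnapshot.

    availability = share of miners not busy; utilization = 100 - availability.
    Returns zeros for an empty pool instead of dividing by zero.
    """
    if total_miners <= 0:
        return {"availability_pct": 0.0, "utilization_pct": 0.0}
    active = total_miners - busy_miners
    availability = active / total_miners * 100.0
    return {
        "availability_pct": availability,
        "utilization_pct": 100.0 - availability,
    }
```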


@@ -0,0 +1,325 @@
"""
Billing Integration Service for Pool-Hub
Integrates pool-hub usage data with coordinator-api's billing system.
"""
import asyncio
import logging
from datetime import datetime, timedelta
from decimal import Decimal
from typing import Dict, List, Optional, Any
import httpx
from sqlalchemy import and_, func, select
from sqlalchemy.orm import Session
from ..models import Miner, ServiceConfig, MatchRequest, MatchResult, Feedback
from ..settings import settings
logger = logging.getLogger(__name__)
class BillingIntegration:
"""Service for integrating pool-hub with coordinator-api billing"""
def __init__(self, db: Session):
self.db = db
self.coordinator_billing_url = getattr(
settings, "coordinator_billing_url", "http://localhost:8011"
)
self.coordinator_api_key = getattr(
settings, "coordinator_api_key", None
)
self.logger = logging.getLogger(__name__)
# Resource type mappings
self.resource_type_mapping = {
"gpu_hours": "gpu_hours",
"storage_gb": "storage_gb",
"api_calls": "api_calls",
"compute_hours": "compute_hours",
}
# Pricing configuration (fallback if coordinator-api pricing not available)
self.fallback_pricing = {
"gpu_hours": {"unit_price": Decimal("0.50")},
"storage_gb": {"unit_price": Decimal("0.02")},
"api_calls": {"unit_price": Decimal("0.0001")},
"compute_hours": {"unit_price": Decimal("0.30")},
}
async def record_usage(
self,
tenant_id: str,
resource_type: str,
quantity: Decimal,
unit_price: Optional[Decimal] = None,
job_id: Optional[str] = None,
metadata: Optional[Dict[str, Any]] = None,
) -> Dict[str, Any]:
"""Record usage data to coordinator-api billing system"""
# Use fallback pricing if not provided
if not unit_price:
pricing_config = self.fallback_pricing.get(resource_type, {})
unit_price = pricing_config.get("unit_price", Decimal("0"))
# Calculate total cost
total_cost = unit_price * quantity
# Prepare billing event payload
billing_event = {
"tenant_id": tenant_id,
"event_type": "usage",
"resource_type": resource_type,
"quantity": float(quantity),
"unit_price": float(unit_price),
"total_amount": float(total_cost),
"currency": "USD",
"timestamp": datetime.utcnow().isoformat(),
"metadata": metadata or {},
}
if job_id:
billing_event["job_id"] = job_id
# Send to coordinator-api
try:
response = await self._send_billing_event(billing_event)
self.logger.info(
f"Recorded usage: tenant={tenant_id}, resource={resource_type}, "
f"quantity={quantity}, cost={total_cost}"
)
return response
except Exception as e:
self.logger.error(f"Failed to record usage: {e}")
# Queue for retry in production
return {"status": "failed", "error": str(e)}
async def sync_miner_usage(
self, miner_id: str, start_date: datetime, end_date: datetime
) -> Dict[str, Any]:
"""Sync usage data for a miner to coordinator-api billing"""
# Get miner information
stmt = select(Miner).where(Miner.miner_id == miner_id)
miner = self.db.execute(stmt).scalar_one_or_none()
if not miner:
raise ValueError(f"Miner not found: {miner_id}")
# Map miner to tenant (simplified - in production, use proper mapping)
tenant_id = miner_id # For now, use miner_id as tenant_id
# Collect usage data from pool-hub
usage_data = await self._collect_miner_usage(miner_id, start_date, end_date)
# Send each usage record to coordinator-api
results = []
for resource_type, quantity in usage_data.items():
if quantity > 0:
result = await self.record_usage(
tenant_id=tenant_id,
resource_type=resource_type,
quantity=Decimal(str(quantity)),
metadata={"miner_id": miner_id, "sync_type": "miner_usage"},
)
results.append(result)
return {
"miner_id": miner_id,
"tenant_id": tenant_id,
"period": {"start": start_date.isoformat(), "end": end_date.isoformat()},
"usage_records": len(results),
"results": results,
}
async def sync_all_miners_usage(
self, hours_back: int = 24
) -> Dict[str, Any]:
"""Sync usage data for all miners to coordinator-api billing"""
end_date = datetime.utcnow()
start_date = end_date - timedelta(hours=hours_back)
# Get all miners
stmt = select(Miner)
miners = self.db.execute(stmt).scalars().all()
results = {
"sync_period": {"start": start_date.isoformat(), "end": end_date.isoformat()},
"miners_processed": 0,
"miners_failed": 0,
"total_usage_records": 0,
"details": [],
}
for miner in miners:
try:
result = await self.sync_miner_usage(miner.miner_id, start_date, end_date)
results["details"].append(result)
results["miners_processed"] += 1
results["total_usage_records"] += result["usage_records"]
except Exception as e:
self.logger.error(f"Failed to sync usage for miner {miner.miner_id}: {e}")
results["miners_failed"] += 1
self.logger.info(
f"Usage sync complete: processed={results['miners_processed']}, "
f"failed={results['miners_failed']}, records={results['total_usage_records']}"
)
return results
async def _collect_miner_usage(
self, miner_id: str, start_date: datetime, end_date: datetime
) -> Dict[str, float]:
"""Collect usage data for a miner from pool-hub"""
usage_data = {
"gpu_hours": 0.0,
"api_calls": 0.0,
"compute_hours": 0.0,
}
# Count match requests as API calls
stmt = select(func.count(MatchRequest.id)).where(
and_(
MatchRequest.created_at >= start_date,
MatchRequest.created_at <= end_date,
)
)
# Filter by miner_id if match requests have that field
# For now, count all requests (simplified)
api_calls = self.db.execute(stmt).scalar() or 0
usage_data["api_calls"] = float(api_calls)
# Calculate compute hours from match results
stmt = (
select(MatchResult)
.where(
and_(
MatchResult.miner_id == miner_id,
MatchResult.created_at >= start_date,
MatchResult.created_at <= end_date,
)
)
.where(MatchResult.eta_ms.is_not(None))
)
results = self.db.execute(stmt).scalars().all()
# Estimate compute hours from response times (simplified)
# In production, use actual job duration
total_compute_time_ms = sum(r.eta_ms for r in results if r.eta_ms)
compute_hours = (total_compute_time_ms / 1000 / 3600) if results else 0.0
usage_data["compute_hours"] = compute_hours
# Estimate GPU hours from miner capacity and compute hours
# In production, use actual GPU utilization data
gpu_hours = compute_hours * 1.5 # Estimate 1.5 GPUs per job on average
usage_data["gpu_hours"] = gpu_hours
return usage_data
async def _send_billing_event(self, billing_event: Dict[str, Any]) -> Dict[str, Any]:
"""Send billing event to coordinator-api"""
url = f"{self.coordinator_billing_url}/api/billing/usage"
headers = {"Content-Type": "application/json"}
if self.coordinator_api_key:
headers["Authorization"] = f"Bearer {self.coordinator_api_key}"
async with httpx.AsyncClient(timeout=30.0) as client:
response = await client.post(url, json=billing_event, headers=headers)
response.raise_for_status()
return response.json()
async def get_billing_metrics(
self, tenant_id: Optional[str] = None, hours: int = 24
) -> Dict[str, Any]:
"""Get billing metrics from coordinator-api"""
url = f"{self.coordinator_billing_url}/api/billing/metrics"
params = {"hours": hours}
if tenant_id:
params["tenant_id"] = tenant_id
headers = {}
if self.coordinator_api_key:
headers["Authorization"] = f"Bearer {self.coordinator_api_key}"
async with httpx.AsyncClient(timeout=30.0) as client:
response = await client.get(url, params=params, headers=headers)
response.raise_for_status()
return response.json()
async def trigger_invoice_generation(
self, tenant_id: str, period_start: datetime, period_end: datetime
) -> Dict[str, Any]:
"""Trigger invoice generation in coordinator-api"""
url = f"{self.coordinator_billing_url}/api/billing/invoice"
payload = {
"tenant_id": tenant_id,
"period_start": period_start.isoformat(),
"period_end": period_end.isoformat(),
}
headers = {"Content-Type": "application/json"}
if self.coordinator_api_key:
headers["Authorization"] = f"Bearer {self.coordinator_api_key}"
async with httpx.AsyncClient(timeout=30.0) as client:
response = await client.post(url, json=payload, headers=headers)
response.raise_for_status()
return response.json()
class BillingIntegrationScheduler:
"""Scheduler for automated billing synchronization"""
def __init__(self, billing_integration: BillingIntegration):
self.billing_integration = billing_integration
self.logger = logging.getLogger(__name__)
self.running = False
async def start(self, sync_interval_hours: int = 1):
"""Start the billing synchronization scheduler"""
if self.running:
return
self.running = True
self.logger.info("Billing Integration scheduler started")
# Start sync loop
asyncio.create_task(self._sync_loop(sync_interval_hours))
async def stop(self):
"""Stop the billing synchronization scheduler"""
self.running = False
self.logger.info("Billing Integration scheduler stopped")
async def _sync_loop(self, interval_hours: int):
"""Background task that syncs usage data periodically"""
while self.running:
try:
await self.billing_integration.sync_all_miners_usage(
hours_back=interval_hours
)
# Wait for next sync interval
await asyncio.sleep(interval_hours * 3600)
except Exception as e:
self.logger.error(f"Error in billing sync loop: {e}")
await asyncio.sleep(300) # Retry in 5 minutes
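`record_usage` above falls back to a static price table when no `unit_price` is supplied and treats unknown resource types as free. A self-contained sketch of that costing step, with prices copied from `fallback_pricing` in the diff (helper name illustrative):

```python
from decimal import Decimal
from typing import Optional

# Fallback unit prices, mirroring BillingIntegration.fallback_pricing.
FALLBACK_PRICING = {
    "gpu_hours": Decimal("0.50"),
    "storage_gb": Decimal("0.02"),
    "api_calls": Decimal("0.0001"),
    "compute_hours": Decimal("0.30"),
}


def usage_cost(resource_type: str, quantity: Decimal,
               unit_price: Optional[Decimal] = None) -> Decimal:
    """Total cost for one usage event, using the fallback price table when
    the caller supplies no unit price; unknown resource types cost zero."""
    if unit_price is None:
        unit_price = FALLBACK_PRICING.get(resource_type, Decimal("0"))
    return unit_price * quantity
```

Decimal arithmetic keeps the totals exact, which is why the service converts to `float` only at the API boundary.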


@@ -0,0 +1,405 @@
"""
SLA Metrics Collection Service for Pool-Hub
Collects and tracks SLA metrics for miners including uptime, response time, job completion rate, and capacity availability.
"""
import asyncio
import logging
from datetime import datetime, timedelta
from decimal import Decimal
from typing import Dict, List, Optional, Any
from sqlalchemy import and_, desc, func, select
from sqlalchemy.orm import Session
from ..models import (
Miner,
MinerStatus,
SLAMetric,
SLAViolation,
Feedback,
MatchRequest,
MatchResult,
CapacitySnapshot,
)
logger = logging.getLogger(__name__)
class SLACollector:
"""Service for collecting and tracking SLA metrics for miners"""
def __init__(self, db: Session):
self.db = db
self.sla_thresholds = {
"uptime_pct": 95.0,
"response_time_ms": 1000.0,
"completion_rate_pct": 90.0,
"capacity_availability_pct": 80.0,
}
async def record_sla_metric(
self,
miner_id: str,
metric_type: str,
metric_value: float,
metadata: Optional[Dict[str, str]] = None,
) -> SLAMetric:
"""Record an SLA metric for a miner"""
threshold = self.sla_thresholds.get(metric_type, 100.0)
is_violation = self._check_violation(metric_type, metric_value, threshold)
# Create SLA metric record
sla_metric = SLAMetric(
miner_id=miner_id,
metric_type=metric_type,
metric_value=metric_value,
threshold=threshold,
is_violation=is_violation,
timestamp=datetime.utcnow(),
meta_data=metadata or {},
)
self.db.add(sla_metric)
await self.db.commit()
# Create violation record if threshold breached
if is_violation:
await self._record_violation(
miner_id, metric_type, metric_value, threshold, metadata
)
logger.info(
f"Recorded SLA metric: miner={miner_id}, type={metric_type}, "
f"value={metric_value}, violation={is_violation}"
)
return sla_metric
async def collect_miner_uptime(self, miner_id: str) -> float:
"""Calculate miner uptime percentage based on heartbeat intervals"""
# Get miner status
stmt = select(MinerStatus).where(MinerStatus.miner_id == miner_id)
miner_status = (await self.db.execute(stmt)).scalar_one_or_none()
if not miner_status:
return 0.0
# Calculate uptime based on last heartbeat
if miner_status.last_heartbeat_at:
time_since_heartbeat = (
datetime.utcnow() - miner_status.last_heartbeat_at
).total_seconds()
# Consider miner down if no heartbeat for 5 minutes
if time_since_heartbeat > 300:
uptime_pct = 0.0
else:
uptime_pct = 100.0 - (time_since_heartbeat / 300.0) * 100.0
uptime_pct = max(0.0, min(100.0, uptime_pct))
else:
uptime_pct = 0.0
# Update miner status with uptime
miner_status.uptime_pct = uptime_pct
await self.db.commit()
# Record SLA metric
await self.record_sla_metric(
miner_id, "uptime_pct", uptime_pct, {"method": "heartbeat_based"}
)
return uptime_pct
async def collect_response_time(self, miner_id: str) -> Optional[float]:
"""Calculate average response time for a miner from match results"""
# Get recent match results for this miner
stmt = (
select(MatchResult)
.where(MatchResult.miner_id == miner_id)
.order_by(desc(MatchResult.created_at))
.limit(100)
)
results = (await self.db.execute(stmt)).scalars().all()
if not results:
return None
# Calculate average response time (eta_ms)
response_times = [r.eta_ms for r in results if r.eta_ms is not None]
if not response_times:
return None
avg_response_time = sum(response_times) / len(response_times)
# Record SLA metric
await self.record_sla_metric(
miner_id,
"response_time_ms",
avg_response_time,
{"method": "match_results", "sample_size": len(response_times)},
)
return avg_response_time
async def collect_completion_rate(self, miner_id: str) -> Optional[float]:
"""Calculate job completion rate for a miner from feedback"""
# Get recent feedback for this miner
stmt = (
select(Feedback)
.where(Feedback.miner_id == miner_id)
.where(Feedback.created_at >= datetime.utcnow() - timedelta(days=7))
.order_by(Feedback.created_at.desc())
.limit(100)
)
feedback_records = (await self.db.execute(stmt)).scalars().all()
if not feedback_records:
return None
# Calculate completion rate (successful outcomes)
successful = sum(1 for f in feedback_records if f.outcome == "success")
completion_rate = (successful / len(feedback_records)) * 100.0
# Record SLA metric
await self.record_sla_metric(
miner_id,
"completion_rate_pct",
completion_rate,
{"method": "feedback", "sample_size": len(feedback_records)},
)
return completion_rate
async def collect_capacity_availability(self) -> Dict[str, Any]:
"""Collect capacity availability metrics across all miners"""
# Get all miner statuses
stmt = select(MinerStatus)
miner_statuses = (await self.db.execute(stmt)).scalars().all()
if not miner_statuses:
return {
"total_miners": 0,
"active_miners": 0,
"capacity_availability_pct": 0.0,
}
total_miners = len(miner_statuses)
active_miners = sum(1 for ms in miner_statuses if not ms.busy)
capacity_availability_pct = (active_miners / total_miners) * 100.0
# Record capacity snapshot
snapshot = CapacitySnapshot(
total_miners=total_miners,
active_miners=active_miners,
total_parallel_capacity=sum(
m.max_parallel for m in (await self.db.execute(select(Miner))).scalars().all()
),
total_queue_length=sum(ms.queue_len for ms in miner_statuses),
capacity_utilization_pct=100.0 - capacity_availability_pct,
forecast_capacity=total_miners, # Would be calculated from forecasting
recommended_scaling="stable",
scaling_reason="Capacity within normal range",
timestamp=datetime.utcnow(),
meta_data={"method": "real_time_collection"},
)
self.db.add(snapshot)
await self.db.commit()
logger.info(
f"Capacity snapshot: total={total_miners}, active={active_miners}, "
f"availability={capacity_availability_pct:.2f}%"
)
return {
"total_miners": total_miners,
"active_miners": active_miners,
"capacity_availability_pct": capacity_availability_pct,
}
async def collect_all_miner_metrics(self) -> Dict[str, Any]:
"""Collect all SLA metrics for all miners"""
# Get all miners
stmt = select(Miner)
miners = (await self.db.execute(stmt)).scalars().all()
results = {
"miners_processed": 0,
"metrics_collected": [],
"violations_detected": 0,
}
for miner in miners:
try:
# Collect each metric type
uptime = await self.collect_miner_uptime(miner.miner_id)
response_time = await self.collect_response_time(miner.miner_id)
completion_rate = await self.collect_completion_rate(miner.miner_id)
results["metrics_collected"].append(
{
"miner_id": miner.miner_id,
"uptime_pct": uptime,
"response_time_ms": response_time,
"completion_rate_pct": completion_rate,
}
)
results["miners_processed"] += 1
except Exception as e:
logger.error(f"Failed to collect metrics for miner {miner.miner_id}: {e}")
# Collect capacity metrics
capacity = await self.collect_capacity_availability()
results["capacity"] = capacity
# Count violations in this collection cycle
stmt = (
select(func.count(SLAViolation.id))
.where(SLAViolation.resolved_at.is_(None))
.where(SLAViolation.created_at >= datetime.utcnow() - timedelta(hours=1))
)
results["violations_detected"] = (await self.db.execute(stmt)).scalar() or 0
logger.info(
f"SLA collection complete: processed={results['miners_processed']}, "
f"violations={results['violations_detected']}"
)
return results
async def get_sla_metrics(
self, miner_id: Optional[str] = None, hours: int = 24
) -> List[SLAMetric]:
"""Get SLA metrics for a miner or all miners"""
cutoff = datetime.utcnow() - timedelta(hours=hours)
stmt = select(SLAMetric).where(SLAMetric.timestamp >= cutoff)
if miner_id:
stmt = stmt.where(SLAMetric.miner_id == miner_id)
stmt = stmt.order_by(desc(SLAMetric.timestamp))
return (await self.db.execute(stmt)).scalars().all()
async def get_sla_violations(
self, miner_id: Optional[str] = None, resolved: bool = False
) -> List[SLAViolation]:
"""Get SLA violations for a miner or all miners"""
stmt = select(SLAViolation)
if miner_id:
stmt = stmt.where(SLAViolation.miner_id == miner_id)
if resolved:
stmt = stmt.where(SLAViolation.resolved_at.is_not(None))
else:
stmt = stmt.where(SLAViolation.resolved_at.is_(None))
stmt = stmt.order_by(desc(SLAViolation.created_at))
return (await self.db.execute(stmt)).scalars().all()
def _check_violation(self, metric_type: str, value: float, threshold: float) -> bool:
"""Check if a metric value violates its SLA threshold"""
if metric_type in ["uptime_pct", "completion_rate_pct", "capacity_availability_pct"]:
# Higher is better - violation if below threshold
return value < threshold
elif metric_type in ["response_time_ms"]:
# Lower is better - violation if above threshold
return value > threshold
return False
async def _record_violation(
self,
miner_id: str,
metric_type: str,
metric_value: float,
threshold: float,
metadata: Optional[Dict[str, str]] = None,
) -> SLAViolation:
"""Record an SLA violation"""
# Determine severity
if metric_type in ["uptime_pct", "completion_rate_pct"]:
severity = "critical" if metric_value < threshold * 0.8 else "high"
elif metric_type == "response_time_ms":
severity = "critical" if metric_value > threshold * 2 else "high"
else:
severity = "medium"
violation = SLAViolation(
miner_id=miner_id,
violation_type=metric_type,
severity=severity,
metric_value=metric_value,
threshold=threshold,
violation_duration_ms=None, # Will be updated when resolved
created_at=datetime.utcnow(),
meta_data=metadata or {},
)
self.db.add(violation)
await self.db.commit()
logger.warning(
f"SLA violation recorded: miner={miner_id}, type={metric_type}, "
f"severity={severity}, value={metric_value}, threshold={threshold}"
)
return violation
class SLACollectorScheduler:
"""Scheduler for automated SLA metric collection"""
def __init__(self, sla_collector: SLACollector):
self.sla_collector = sla_collector
self.logger = logging.getLogger(__name__)
self.running = False
async def start(self, collection_interval_seconds: int = 300):
"""Start the SLA collection scheduler"""
if self.running:
return
self.running = True
self.logger.info("SLA Collector scheduler started")
# Start collection loop
asyncio.create_task(self._collection_loop(collection_interval_seconds))
async def stop(self):
"""Stop the SLA collection scheduler"""
self.running = False
self.logger.info("SLA Collector scheduler stopped")
async def _collection_loop(self, interval_seconds: int):
"""Background task that collects SLA metrics periodically"""
while self.running:
try:
await self.sla_collector.collect_all_miner_metrics()
# Wait for next collection interval
await asyncio.sleep(interval_seconds)
except Exception as e:
self.logger.error(f"Error in SLA collection loop: {e}")
await asyncio.sleep(60) # Retry in 1 minute
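`collect_miner_uptime` above estimates uptime purely from heartbeat age: a fresh heartbeat scores near 100%, decaying linearly to zero across the 5-minute grace window, and scoring 0% once the window is exceeded. A standalone sketch of that formula (simplified model taken from the diff, function name illustrative):

```python
def heartbeat_uptime_pct(seconds_since_heartbeat: float,
                         grace_seconds: float = 300.0) -> float:
    """Uptime estimate used by collect_miner_uptime: linear decay from
    100% at a fresh heartbeat to 0% at the grace window, clamped to
    [0, 100], and 0% beyond the window."""
    if seconds_since_heartbeat > grace_seconds:
        return 0.0
    pct = 100.0 - (seconds_since_heartbeat / grace_seconds) * 100.0
    return max(0.0, min(100.0, pct))
```

Note the model's implication: a miner heartbeating every 60 seconds will hover around 80%, so the 95% uptime threshold effectively demands very frequent heartbeats.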


@@ -32,9 +32,11 @@ class Settings(BaseSettings):
postgres_dsn: str = Field(default="postgresql+asyncpg://poolhub:poolhub@127.0.0.1:5432/aitbc")
postgres_pool_min: int = Field(default=1)
postgres_pool_max: int = Field(default=10)
test_postgres_dsn: str = Field(default="postgresql+asyncpg://poolhub:poolhub@127.0.0.1:5432/aitbc_test")
redis_url: str = Field(default="redis://127.0.0.1:6379/4")
redis_max_connections: int = Field(default=32)
test_redis_url: str = Field(default="redis://127.0.0.1:6379/4")
session_ttl_seconds: int = Field(default=60)
heartbeat_grace_seconds: int = Field(default=120)
@@ -45,6 +47,30 @@ class Settings(BaseSettings):
prometheus_namespace: str = Field(default="poolhub")
# Coordinator-API Billing Integration
coordinator_billing_url: str = Field(default="http://localhost:8011")
coordinator_api_key: str | None = Field(default=None)
# SLA Configuration
sla_thresholds: Dict[str, float] = Field(
default_factory=lambda: {
"uptime_pct": 95.0,
"response_time_ms": 1000.0,
"completion_rate_pct": 90.0,
"capacity_availability_pct": 80.0,
}
)
# Capacity Planning Configuration
capacity_forecast_hours: int = Field(default=168)
capacity_alert_threshold_pct: float = Field(default=80.0)
# Billing Sync Configuration
billing_sync_interval_hours: int = Field(default=1)
# SLA Collection Configuration
sla_collection_interval_seconds: int = Field(default=300)
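The `sla_thresholds` defaults above drive the collector's direction-aware violation check: for response time a higher value violates, while for the percentage metrics a lower value does. A hedged standalone sketch (constants copied from the diff; helper name illustrative, not part of the Settings model):

```python
# Defaults copied from the sla_thresholds field above.
SLA_THRESHOLDS = {
    "uptime_pct": 95.0,
    "response_time_ms": 1000.0,
    "completion_rate_pct": 90.0,
    "capacity_availability_pct": 80.0,
}

# Metrics where exceeding the threshold is the violation.
LOWER_IS_BETTER = {"response_time_ms"}


def violates(metric_type: str, value: float) -> bool:
    """Direction-aware threshold check mirroring SLACollector._check_violation."""
    threshold = SLA_THRESHOLDS.get(metric_type)
    if threshold is None:
        return False
    if metric_type in LOWER_IS_BETTER:
        return value > threshold
    return value < threshold
```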
def asgi_kwargs(self) -> Dict[str, Any]:
return {
"title": self.app_name,


@@ -6,10 +6,14 @@ from pathlib import Path
import pytest
import pytest_asyncio
from dotenv import load_dotenv
from redis.asyncio import Redis
from sqlalchemy.ext.asyncio import AsyncEngine, AsyncSession, async_sessionmaker, create_async_engine
# Load .env file
BASE_DIR = Path(__file__).resolve().parents[2]
load_dotenv(BASE_DIR / ".env")
POOLHUB_SRC = BASE_DIR / "pool-hub" / "src"
if str(POOLHUB_SRC) not in sys.path:
sys.path.insert(0, str(POOLHUB_SRC))


@@ -0,0 +1,192 @@
"""
Tests for Billing Integration Service
"""
import pytest
from datetime import datetime, timedelta
from decimal import Decimal
from unittest.mock import AsyncMock, patch
from sqlalchemy.orm import Session
from poolhub.models import Miner, MatchRequest, MatchResult
from poolhub.services.billing_integration import BillingIntegration
@pytest.fixture
def billing_integration(db_session: Session) -> BillingIntegration:
"""Create billing integration fixture"""
return BillingIntegration(db_session)
@pytest.fixture
def sample_miner(db_session: Session) -> Miner:
"""Create sample miner fixture"""
miner = Miner(
miner_id="test_miner_001",
api_key_hash="hash123",
addr="127.0.0.1:8080",
proto="http",
gpu_vram_gb=24.0,
gpu_name="RTX 4090",
cpu_cores=16,
ram_gb=64.0,
max_parallel=4,
base_price=0.50,
)
db_session.add(miner)
db_session.commit()
return miner
@pytest.mark.asyncio
async def test_record_usage(billing_integration: BillingIntegration):
"""Test recording usage data"""
# Mock the HTTP client
with patch("poolhub.services.billing_integration.httpx.AsyncClient") as mock_client:
mock_response = AsyncMock()
# httpx.Response.json() and raise_for_status() are synchronous; plain
# callables keep the code under test from receiving un-awaited coroutines.
mock_response.json = lambda: {"status": "success", "id": "usage_123"}
mock_response.raise_for_status = lambda: None
mock_client.return_value.__aenter__.return_value.post = AsyncMock(return_value=mock_response)
result = await billing_integration.record_usage(
tenant_id="tenant_001",
resource_type="gpu_hours",
quantity=Decimal("10.5"),
unit_price=Decimal("0.50"),
job_id="job_123",
)
assert result["status"] == "success"
@pytest.mark.asyncio
async def test_record_usage_with_fallback_pricing(billing_integration: BillingIntegration):
"""Test recording usage with fallback pricing when unit_price not provided"""
with patch("poolhub.services.billing_integration.httpx.AsyncClient") as mock_client:
mock_response = AsyncMock()
# httpx response methods are synchronous.
mock_response.json = lambda: {"status": "success", "id": "usage_123"}
mock_response.raise_for_status = lambda: None
mock_client.return_value.__aenter__.return_value.post = AsyncMock(return_value=mock_response)
result = await billing_integration.record_usage(
tenant_id="tenant_001",
resource_type="gpu_hours",
quantity=Decimal("10.5"),
# unit_price not provided
)
assert result["status"] == "success"
@pytest.mark.asyncio
async def test_sync_miner_usage(billing_integration: BillingIntegration, sample_miner: Miner):
"""Test syncing usage for a specific miner"""
end_date = datetime.utcnow()
start_date = end_date - timedelta(hours=24)
with patch("poolhub.services.billing_integration.httpx.AsyncClient") as mock_client:
mock_response = AsyncMock()
# httpx response methods are synchronous.
mock_response.json = lambda: {"status": "success", "id": "usage_123"}
mock_response.raise_for_status = lambda: None
mock_client.return_value.__aenter__.return_value.post = AsyncMock(return_value=mock_response)
result = await billing_integration.sync_miner_usage(
miner_id=sample_miner.miner_id,
start_date=start_date,
end_date=end_date,
)
assert result["miner_id"] == sample_miner.miner_id
assert result["tenant_id"] == sample_miner.miner_id
assert "usage_records" in result
@pytest.mark.asyncio
async def test_sync_all_miners_usage(billing_integration: BillingIntegration, sample_miner: Miner):
"""Test syncing usage for all miners"""
with patch("poolhub.services.billing_integration.httpx.AsyncClient") as mock_client:
mock_response = AsyncMock()
# httpx response methods are synchronous.
mock_response.json = lambda: {"status": "success", "id": "usage_123"}
mock_response.raise_for_status = lambda: None
mock_client.return_value.__aenter__.return_value.post = AsyncMock(return_value=mock_response)
result = await billing_integration.sync_all_miners_usage(hours_back=24)
assert result["miners_processed"] >= 1
assert "total_usage_records" in result
def test_collect_miner_usage(billing_integration: BillingIntegration, sample_miner: Miner):
    """Test collecting usage data for a miner"""
    end_date = datetime.utcnow()
    start_date = end_date - timedelta(hours=24)
    # Call the collector directly: the sync Session fixture has no run_sync()
    usage_data = billing_integration._collect_miner_usage(
        sample_miner.miner_id, start_date, end_date
    )
    assert "gpu_hours" in usage_data
    assert "api_calls" in usage_data
    assert "compute_hours" in usage_data

@pytest.mark.asyncio
async def test_get_billing_metrics(billing_integration: BillingIntegration):
"""Test getting billing metrics from coordinator-api"""
with patch("poolhub.services.billing_integration.httpx.AsyncClient") as mock_client:
mock_response = AsyncMock()
mock_response.json.return_value = {
"totals": {"cost": 100.0, "records": 50},
"by_resource": {"gpu_hours": {"cost": 50.0}},
}
mock_response.raise_for_status = AsyncMock()
mock_client.return_value.__aenter__.return_value.get = AsyncMock(return_value=mock_response)
metrics = await billing_integration.get_billing_metrics(hours=24)
assert "totals" in metrics
@pytest.mark.asyncio
async def test_trigger_invoice_generation(billing_integration: BillingIntegration):
"""Test triggering invoice generation"""
with patch("poolhub.services.billing_integration.httpx.AsyncClient") as mock_client:
mock_response = AsyncMock()
mock_response.json.return_value = {
"invoice_number": "INV-001",
"status": "draft",
"total_amount": 100.0,
}
mock_response.raise_for_status = AsyncMock()
mock_client.return_value.__aenter__.return_value.post = AsyncMock(return_value=mock_response)
end_date = datetime.utcnow()
start_date = end_date - timedelta(days=30)
result = await billing_integration.trigger_invoice_generation(
tenant_id="tenant_001",
period_start=start_date,
period_end=end_date,
)
assert result["invoice_number"] == "INV-001"
def test_resource_type_mapping(billing_integration: BillingIntegration):
"""Test resource type mapping"""
assert "gpu_hours" in billing_integration.resource_type_mapping
assert "storage_gb" in billing_integration.resource_type_mapping
def test_fallback_pricing(billing_integration: BillingIntegration):
"""Test fallback pricing configuration"""
assert "gpu_hours" in billing_integration.fallback_pricing
assert billing_integration.fallback_pricing["gpu_hours"]["unit_price"] == Decimal("0.50")
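
The fallback path exercised in `test_record_usage_with_fallback_pricing` presumably resolves prices along these lines. This is a sketch only: `resolve_unit_price` and the `storage_gb` rate are assumptions, since the `BillingIntegration` internals are not part of this diff; only the `gpu_hours` rate of `0.50` is pinned down by the tests above.

```python
from decimal import Decimal
from typing import Optional

# Hypothetical fallback table; only the gpu_hours rate is confirmed by the tests.
FALLBACK_PRICING = {
    "gpu_hours": {"unit_price": Decimal("0.50")},
    "storage_gb": {"unit_price": Decimal("0.10")},  # assumed value
}

def resolve_unit_price(resource_type: str, unit_price: Optional[Decimal]) -> Decimal:
    """Prefer the caller-supplied price; otherwise fall back to the table."""
    if unit_price is not None:
        return unit_price
    return FALLBACK_PRICING[resource_type]["unit_price"]
```

With this shape, omitting `unit_price` (as the test does) falls back to the configured `gpu_hours` rate, while an explicit price always wins.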


@@ -0,0 +1,212 @@
"""
Integration Tests for Pool-Hub with Coordinator-API
Tests the integration between pool-hub and coordinator-api's billing system.
"""
import pytest
from datetime import datetime, timedelta
from decimal import Decimal
from sqlalchemy.orm import Session
from poolhub.models import Miner, MinerStatus, SLAMetric, CapacitySnapshot
from poolhub.services.sla_collector import SLACollector
from poolhub.services.billing_integration import BillingIntegration
@pytest.fixture
def sla_collector(db_session: Session) -> SLACollector:
"""Create SLA collector fixture"""
return SLACollector(db_session)
@pytest.fixture
def billing_integration(db_session: Session) -> BillingIntegration:
"""Create billing integration fixture"""
return BillingIntegration(db_session)
@pytest.fixture
def sample_miner(db_session: Session) -> Miner:
"""Create sample miner fixture"""
miner = Miner(
miner_id="test_miner_001",
api_key_hash="hash123",
addr="127.0.0.1:8080",
proto="http",
gpu_vram_gb=24.0,
gpu_name="RTX 4090",
cpu_cores=16,
ram_gb=64.0,
max_parallel=4,
base_price=0.50,
)
db_session.add(miner)
db_session.commit()
return miner
@pytest.mark.asyncio
async def test_end_to_end_sla_to_billing_workflow(
    sla_collector: SLACollector,
    billing_integration: BillingIntegration,
    sample_miner: Miner,
):
    """Test end-to-end workflow from SLA collection to billing"""
    # Step 1: Collect SLA metrics
    await sla_collector.record_sla_metric(
        miner_id=sample_miner.miner_id,
        metric_type="uptime_pct",
        metric_value=98.5,
    )
    # Step 2: Verify metric was recorded
    metrics = await sla_collector.get_sla_metrics(
        miner_id=sample_miner.miner_id, hours=1
    )
    assert len(metrics) > 0
    # Step 3: Collect usage data for billing
    end_date = datetime.utcnow()
    start_date = end_date - timedelta(hours=1)
    usage_data = billing_integration._collect_miner_usage(
        sample_miner.miner_id, start_date, end_date
    )
    assert "gpu_hours" in usage_data
    assert "api_calls" in usage_data

@pytest.mark.asyncio
async def test_capacity_snapshot_creation(sla_collector: SLACollector, sample_miner: Miner):
    """Test capacity snapshot creation for capacity planning"""
    # Create capacity snapshot
    capacity = await sla_collector.collect_capacity_availability()
    assert capacity["total_miners"] >= 1
    assert "active_miners" in capacity
    assert "capacity_availability_pct" in capacity
    # Verify snapshot was stored in database
    snapshots = (
        sla_collector.db.query(CapacitySnapshot)
        .order_by(CapacitySnapshot.timestamp.desc())
        .limit(1)
        .all()
    )
    assert len(snapshots) > 0

@pytest.mark.asyncio
async def test_sla_violation_billing_correlation(
    sla_collector: SLACollector,
    billing_integration: BillingIntegration,
    sample_miner: Miner,
):
    """Test correlation between SLA violations and billing"""
    # Record a violation
    await sla_collector.record_sla_metric(
        miner_id=sample_miner.miner_id,
        metric_type="uptime_pct",
        metric_value=80.0,  # Below threshold
    )
    # Check violation was recorded
    violations = await sla_collector.get_sla_violations(
        miner_id=sample_miner.miner_id, resolved=False
    )
    assert len(violations) > 0
    # Usage should still be recorded even with violations
    end_date = datetime.utcnow()
    start_date = end_date - timedelta(hours=1)
    usage_data = billing_integration._collect_miner_usage(
        sample_miner.miner_id, start_date, end_date
    )
    assert usage_data is not None

@pytest.mark.asyncio
async def test_multi_miner_sla_collection(sla_collector: SLACollector, db_session: Session):
    """Test SLA collection across multiple miners"""
    # Create multiple miners
    miners = []
    for i in range(3):
        miner = Miner(
            miner_id=f"test_miner_{i:03d}",
            api_key_hash=f"hash{i}",
            addr=f"127.0.0.{i + 1}:8080",
            proto="http",
            gpu_vram_gb=24.0,
            gpu_name="RTX 4090",
            cpu_cores=16,
            ram_gb=64.0,
            max_parallel=4,
            base_price=0.50,
        )
        db_session.add(miner)
        miners.append(miner)
    db_session.commit()
    # Collect metrics for all miners
    results = await sla_collector.collect_all_miner_metrics()
    assert results["miners_processed"] >= 3

@pytest.mark.asyncio
async def test_billing_sync_with_coordinator_api(
    billing_integration: BillingIntegration,
    sample_miner: Miner,
):
    """Test billing sync with coordinator-api (mocked)"""
    from unittest.mock import AsyncMock, patch

    end_date = datetime.utcnow()
    start_date = end_date - timedelta(hours=1)
    with patch("poolhub.services.billing_integration.httpx.AsyncClient") as mock_client:
        mock_response = AsyncMock()
        mock_response.json.return_value = {"status": "success", "id": "usage_123"}
        mock_response.raise_for_status = AsyncMock()
        mock_client.return_value.__aenter__.return_value.post = AsyncMock(
            return_value=mock_response
        )
        result = await billing_integration.sync_miner_usage(
            miner_id=sample_miner.miner_id, start_date=start_date, end_date=end_date
        )
    assert result["miner_id"] == sample_miner.miner_id
    assert result["usage_records"] >= 0

def test_sla_threshold_configuration(sla_collector: SLACollector):
"""Test SLA threshold configuration"""
# Verify default thresholds
assert sla_collector.sla_thresholds["uptime_pct"] == 95.0
assert sla_collector.sla_thresholds["response_time_ms"] == 1000.0
assert sla_collector.sla_thresholds["completion_rate_pct"] == 90.0
assert sla_collector.sla_thresholds["capacity_availability_pct"] == 80.0
@pytest.mark.asyncio
async def test_capacity_utilization_calculation(sla_collector: SLACollector, sample_miner: Miner):
    """Test capacity utilization calculation"""
    capacity = await sla_collector.collect_capacity_availability()
    # Verify utilization is between 0 and 100
    assert 0 <= capacity["capacity_availability_pct"] <= 100


@@ -0,0 +1,186 @@
"""
Tests for SLA Collector Service
"""
import pytest
from datetime import datetime, timedelta
from decimal import Decimal
from sqlalchemy.orm import Session
from poolhub.models import Miner, MinerStatus, SLAMetric, SLAViolation, Feedback, MatchResult
from poolhub.services.sla_collector import SLACollector
@pytest.fixture
def sla_collector(db_session: Session) -> SLACollector:
"""Create SLA collector fixture"""
return SLACollector(db_session)
@pytest.fixture
def sample_miner(db_session: Session) -> Miner:
"""Create sample miner fixture"""
miner = Miner(
miner_id="test_miner_001",
api_key_hash="hash123",
addr="127.0.0.1:8080",
proto="http",
gpu_vram_gb=24.0,
gpu_name="RTX 4090",
cpu_cores=16,
ram_gb=64.0,
max_parallel=4,
base_price=0.50,
)
db_session.add(miner)
db_session.commit()
return miner
@pytest.fixture
def sample_miner_status(db_session: Session, sample_miner: Miner) -> MinerStatus:
"""Create sample miner status fixture"""
status = MinerStatus(
miner_id=sample_miner.miner_id,
queue_len=2,
busy=False,
avg_latency_ms=150,
temp_c=65,
mem_free_gb=32.0,
last_heartbeat_at=datetime.utcnow(),
)
db_session.add(status)
db_session.commit()
return status
@pytest.mark.asyncio
async def test_record_sla_metric(sla_collector: SLACollector, sample_miner: Miner):
"""Test recording an SLA metric"""
metric = await sla_collector.record_sla_metric(
miner_id=sample_miner.miner_id,
metric_type="uptime_pct",
metric_value=98.5,
metadata={"test": "true"},
)
assert metric.miner_id == sample_miner.miner_id
assert metric.metric_type == "uptime_pct"
assert metric.metric_value == 98.5
    assert metric.is_violation is False
@pytest.mark.asyncio
async def test_record_sla_metric_violation(sla_collector: SLACollector, sample_miner: Miner):
"""Test recording an SLA metric that violates threshold"""
metric = await sla_collector.record_sla_metric(
miner_id=sample_miner.miner_id,
metric_type="uptime_pct",
metric_value=80.0, # Below threshold of 95%
metadata={"test": "true"},
)
    assert metric.is_violation is True
# Check violation was recorded
violations = await sla_collector.get_sla_violations(
miner_id=sample_miner.miner_id, resolved=False
)
assert len(violations) > 0
assert violations[0].violation_type == "uptime_pct"
@pytest.mark.asyncio
async def test_collect_miner_uptime(sla_collector: SLACollector, sample_miner_status: MinerStatus):
"""Test collecting miner uptime"""
uptime = await sla_collector.collect_miner_uptime(sample_miner_status.miner_id)
assert uptime is not None
assert 0 <= uptime <= 100
@pytest.mark.asyncio
async def test_collect_response_time_no_results(sla_collector: SLACollector, sample_miner: Miner):
"""Test collecting response time when no match results exist"""
response_time = await sla_collector.collect_response_time(sample_miner.miner_id)
assert response_time is None
@pytest.mark.asyncio
async def test_collect_completion_rate_no_feedback(sla_collector: SLACollector, sample_miner: Miner):
"""Test collecting completion rate when no feedback exists"""
completion_rate = await sla_collector.collect_completion_rate(sample_miner.miner_id)
assert completion_rate is None
@pytest.mark.asyncio
async def test_collect_capacity_availability(sla_collector: SLACollector):
"""Test collecting capacity availability"""
capacity = await sla_collector.collect_capacity_availability()
assert "total_miners" in capacity
assert "active_miners" in capacity
assert "capacity_availability_pct" in capacity
@pytest.mark.asyncio
async def test_get_sla_metrics(sla_collector: SLACollector, sample_miner: Miner):
"""Test getting SLA metrics"""
# Record a metric first
await sla_collector.record_sla_metric(
miner_id=sample_miner.miner_id,
metric_type="uptime_pct",
metric_value=98.5,
)
metrics = await sla_collector.get_sla_metrics(
miner_id=sample_miner.miner_id, hours=24
)
assert len(metrics) > 0
assert metrics[0].miner_id == sample_miner.miner_id
@pytest.mark.asyncio
async def test_get_sla_violations(sla_collector: SLACollector, sample_miner: Miner):
"""Test getting SLA violations"""
# Record a violation
await sla_collector.record_sla_metric(
miner_id=sample_miner.miner_id,
metric_type="uptime_pct",
metric_value=80.0, # Below threshold
)
violations = await sla_collector.get_sla_violations(
miner_id=sample_miner.miner_id, resolved=False
)
assert len(violations) > 0
def test_check_violation_uptime_below_threshold(sla_collector: SLACollector):
    """Test violation check for uptime below threshold"""
    is_violation = sla_collector._check_violation("uptime_pct", 90.0, 95.0)
    assert is_violation is True

def test_check_violation_uptime_above_threshold(sla_collector: SLACollector):
    """Test violation check for uptime above threshold"""
    is_violation = sla_collector._check_violation("uptime_pct", 98.0, 95.0)
    assert is_violation is False

def test_check_violation_response_time_above_threshold(sla_collector: SLACollector):
    """Test violation check for response time above threshold"""
    is_violation = sla_collector._check_violation("response_time_ms", 2000.0, 1000.0)
    assert is_violation is True

def test_check_violation_response_time_below_threshold(sla_collector: SLACollector):
    """Test violation check for response time below threshold"""
    is_violation = sla_collector._check_violation("response_time_ms", 500.0, 1000.0)
    assert is_violation is False
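
The four `test_check_violation_*` cases above encode a direction-aware comparison: percentage metrics violate when they fall below their threshold, while latency violates when it rises above. A minimal sketch of that logic, assuming this split (the real `SLACollector._check_violation` is not included in this diff):

```python
# Metrics where a HIGHER value is bad (latency-style); all other metric
# types here are percentages, where a LOWER value is bad.
HIGHER_IS_WORSE = {"response_time_ms"}

def check_violation(metric_type: str, value: float, threshold: float) -> bool:
    """Return True when the observed value breaches the threshold."""
    if metric_type in HIGHER_IS_WORSE:
        return value > threshold
    return value < threshold
```

Any new metric type defaults to "lower is worse" under this sketch, which matches the percentage-style thresholds configured in `test_sla_threshold_configuration`.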


@@ -0,0 +1,216 @@
"""
Tests for SLA API Endpoints
"""
import pytest
from datetime import datetime, timedelta
from decimal import Decimal
from fastapi.testclient import TestClient
from sqlalchemy.orm import Session
from poolhub.models import Miner, MinerStatus, SLAMetric
from poolhub.app.routers.sla import router
from poolhub.database import get_db
@pytest.fixture
def test_client(db_session: Session):
"""Create test client fixture"""
from fastapi import FastAPI
app = FastAPI()
app.include_router(router)
# Override database dependency
def override_get_db():
try:
yield db_session
finally:
pass
app.dependency_overrides[get_db] = override_get_db
return TestClient(app)
@pytest.fixture
def sample_miner(db_session: Session) -> Miner:
"""Create sample miner fixture"""
miner = Miner(
miner_id="test_miner_001",
api_key_hash="hash123",
addr="127.0.0.1:8080",
proto="http",
gpu_vram_gb=24.0,
gpu_name="RTX 4090",
cpu_cores=16,
ram_gb=64.0,
max_parallel=4,
base_price=0.50,
)
db_session.add(miner)
db_session.commit()
return miner
@pytest.fixture
def sample_sla_metric(db_session: Session, sample_miner: Miner) -> SLAMetric:
"""Create sample SLA metric fixture"""
from uuid import uuid4
metric = SLAMetric(
id=uuid4(),
miner_id=sample_miner.miner_id,
metric_type="uptime_pct",
metric_value=98.5,
threshold=95.0,
is_violation=False,
timestamp=datetime.utcnow(),
metadata={"test": "true"},
)
db_session.add(metric)
db_session.commit()
return metric
def test_get_miner_sla_metrics(test_client: TestClient, sample_sla_metric: SLAMetric):
"""Test getting SLA metrics for a specific miner"""
response = test_client.get(f"/sla/metrics/{sample_sla_metric.miner_id}?hours=24")
assert response.status_code == 200
data = response.json()
assert len(data) > 0
assert data[0]["miner_id"] == sample_sla_metric.miner_id
def test_get_all_sla_metrics(test_client: TestClient, sample_sla_metric: SLAMetric):
"""Test getting SLA metrics across all miners"""
response = test_client.get("/sla/metrics?hours=24")
assert response.status_code == 200
data = response.json()
assert len(data) > 0
def test_get_sla_violations(test_client: TestClient, sample_miner: Miner):
"""Test getting SLA violations"""
response = test_client.get("/sla/violations?resolved=false")
assert response.status_code == 200
data = response.json()
assert isinstance(data, list)
def test_collect_sla_metrics(test_client: TestClient):
"""Test triggering SLA metrics collection"""
response = test_client.post("/sla/metrics/collect")
assert response.status_code == 200
data = response.json()
assert "miners_processed" in data
def test_get_capacity_snapshots(test_client: TestClient):
"""Test getting capacity planning snapshots"""
response = test_client.get("/sla/capacity/snapshots?hours=24")
assert response.status_code == 200
data = response.json()
assert isinstance(data, list)
def test_get_capacity_forecast(test_client: TestClient):
"""Test getting capacity forecast"""
response = test_client.get("/sla/capacity/forecast?hours_ahead=168")
assert response.status_code == 200
data = response.json()
assert "forecast_horizon_hours" in data
assert "current_capacity" in data
def test_get_scaling_recommendations(test_client: TestClient):
"""Test getting scaling recommendations"""
response = test_client.get("/sla/capacity/recommendations")
assert response.status_code == 200
data = response.json()
assert "current_state" in data
assert "recommendations" in data
def test_configure_capacity_alerts(test_client: TestClient):
"""Test configuring capacity alerts"""
alert_config = {
"threshold_pct": 80.0,
"notification_email": "admin@example.com",
}
response = test_client.post("/sla/capacity/alerts/configure", json=alert_config)
assert response.status_code == 200
data = response.json()
assert data["status"] == "configured"
def test_get_billing_usage(test_client: TestClient):
"""Test getting billing usage data"""
response = test_client.get("/sla/billing/usage?hours=24")
# This may fail if coordinator-api is not available
# For now, we expect either 200 or 500
assert response.status_code in [200, 500]
def test_sync_billing_usage(test_client: TestClient):
"""Test triggering billing sync"""
request_data = {
"hours_back": 24,
}
response = test_client.post("/sla/billing/sync", json=request_data)
# This may fail if coordinator-api is not available
# For now, we expect either 200 or 500
assert response.status_code in [200, 500]
def test_record_usage(test_client: TestClient):
"""Test recording a single usage event"""
request_data = {
"tenant_id": "tenant_001",
"resource_type": "gpu_hours",
"quantity": 10.5,
"unit_price": 0.50,
"job_id": "job_123",
}
response = test_client.post("/sla/billing/usage/record", json=request_data)
# This may fail if coordinator-api is not available
# For now, we expect either 200 or 500
assert response.status_code in [200, 500]
def test_generate_invoice(test_client: TestClient):
"""Test triggering invoice generation"""
end_date = datetime.utcnow()
start_date = end_date - timedelta(days=30)
request_data = {
"tenant_id": "tenant_001",
"period_start": start_date.isoformat(),
"period_end": end_date.isoformat(),
}
response = test_client.post("/sla/billing/invoice/generate", json=request_data)
# This may fail if coordinator-api is not available
# For now, we expect either 200 or 500
assert response.status_code in [200, 500]
def test_get_sla_status(test_client: TestClient):
"""Test getting overall SLA status"""
response = test_client.get("/sla/status")
assert response.status_code == 200
data = response.json()
assert "status" in data
assert "active_violations" in data
assert "timestamp" in data