Implement RECEIPT_CLAIM transaction type
Some checks failed:
- Blockchain Synchronization Verification / sync-verification (push): successful in 4s
- Documentation Validation / validate-docs (push): successful in 12s
- Documentation Validation / validate-policies-strict (push): successful in 3s
- Integration Tests / test-service-integration (push): failing after 12s
- Multi-Node Blockchain Health Monitoring / health-check (push): successful in 3s
- P2P Network Verification / p2p-verification (push): successful in 2s
- Python Tests / test-python (push): successful in 10s
- Security Scanning / security-scan (push): successful in 31s

- Add status fields to Receipt model (status, claimed_at, claimed_by)
- Add RECEIPT_CLAIM handling to state_transition.py with validation and reward minting
- Add type field to Transaction model for reliable transaction type storage
- Update router to use TransactionRequest model to preserve type field
- Update poa.py to extract the transaction type from mempool transaction content and store only the original payload
- Add RECEIPT_CLAIM to GasType enum with gas schedule
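
The receipt status fields and the RECEIPT_CLAIM validation described above could be sketched roughly as follows. This is a hypothetical illustration, not the commit's actual code: the field names `status`, `claimed_at`, and `claimed_by` come from the commit message, but `ReceiptStatus`, `apply_receipt_claim`, and the validation rules are assumptions; the real logic lives in the project's models and state_transition.py.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class ReceiptStatus(Enum):
    PENDING = "pending"
    CLAIMED = "claimed"

@dataclass
class Receipt:
    receipt_id: str
    miner: str
    reward: int
    # Fields added by this commit (per the commit message):
    status: ReceiptStatus = ReceiptStatus.PENDING
    claimed_at: Optional[int] = None   # e.g. block height at claim time
    claimed_by: Optional[str] = None   # claiming wallet address

def apply_receipt_claim(receipt: Receipt, claimer: str, height: int) -> int:
    """Validate a RECEIPT_CLAIM and return the reward amount to mint."""
    if receipt.status is ReceiptStatus.CLAIMED:
        raise ValueError(f"receipt {receipt.receipt_id} already claimed")
    if claimer != receipt.miner:
        raise ValueError("only the receipt's owner may claim it")
    receipt.status = ReceiptStatus.CLAIMED
    receipt.claimed_at = height
    receipt.claimed_by = claimer
    return receipt.reward
```

Marking the receipt as claimed before minting is what makes the transition idempotent-safe: a replayed RECEIPT_CLAIM fails validation rather than minting twice.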
aitbc
2026-04-22 13:35:31 +02:00
parent a6a840a930
commit f36fd45d28
40 changed files with 1194 additions and 349 deletions


@@ -1,7 +1,7 @@
 ---
 description: Atomic AITBC AI job operations with deterministic monitoring and optimization
 title: aitbc-ai-operator
-version: 1.0
+version: 1.1
 ---
 # AITBC AI Operator
@@ -17,15 +17,21 @@ Trigger when user requests AI operations: job submission, status monitoring, res
 {
   "operation": "submit|status|results|list|optimize|cancel",
   "wallet": "string (for submit/optimize)",
-  "job_type": "inference|parallel|ensemble|multimodal|resource-allocation|performance-tuning|economic-modeling|marketplace-strategy|investment-strategy",
+  "job_type": "inference|training|multimodal|ollama|streaming|monitoring",
   "prompt": "string (for submit)",
   "payment": "number (for submit)",
   "job_id": "string (for status/results/cancel)",
   "agent_id": "string (for optimize)",
   "cpu": "number (for optimize)",
   "memory": "number (for optimize)",
   "gpu": "number (for optimize)",
   "duration": "number (for optimize)",
-  "limit": "number (optional for list)"
+  "limit": "number (optional for list)",
+  "model": "string (optional for ollama jobs, e.g., llama2, mistral)",
+  "provider_id": "string (optional for GPU provider selection)",
+  "endpoint": "string (optional for custom Ollama endpoint)",
+  "batch_file": "string (optional for batch operations)",
+  "parallel": "number (optional for parallel job count)"
 }
 ```
@@ -91,9 +97,13 @@ Trigger when user requests AI operations: job submission, status monitoring, res
 ## Environment Assumptions
 - AITBC CLI accessible at `/opt/aitbc/aitbc-cli`
 - AI services operational (Ollama, exchange, coordinator)
+- Ollama endpoint accessible at `http://localhost:11434` or custom endpoint
+- GPU provider marketplace operational for resource allocation
 - Sufficient wallet balance for job payments
 - Resource allocation system operational
 - Job queue processing functional
+- Ollama models available: llama2, mistral, codellama, etc.
+- GPU providers registered with unique p2p_node_id for P2P connectivity
 ## Error Handling
 - Insufficient balance → Return error with required amount
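
The insufficient-balance rule above ("return error with required amount") might produce a payload along these lines. This is a minimal sketch under assumed names (`check_balance` and the response keys are hypothetical); the real router's error shape is not shown in this diff.

```python
def check_balance(balance: float, payment: float) -> dict:
    """Sketch of the 'insufficient balance' error path: report the
    required payment and how much more the wallet needs."""
    if balance < payment:
        return {
            "error": "insufficient_balance",
            "required": payment,
            "shortfall": round(payment - balance, 8),
        }
    return {"ok": True}
```

Returning the shortfall alongside the required amount lets a CLI caller top up the wallet in one step instead of retrying blindly.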