Move gpu_acceleration to dev directory

- Move GPU acceleration code from root to dev/gpu_acceleration/
- No active imports found in production apps, CLI, or scripts
- Contains GPU provider implementations, CUDA kernels, and research code
- Belongs in dev/ as development/research code, not production
This commit is contained in:
aitbc
2026-04-16 22:51:29 +02:00
parent a536b731fd
commit 2246f92cd7
31 changed files with 0 additions and 0 deletions


@@ -0,0 +1,31 @@
# GPU Acceleration Benchmarks
Benchmark snapshots for common GPUs in the AITBC stack. Values are indicative and should be validated on target hardware.
## Throughput (TFLOPS, peak theoretical)
| GPU | FP32 TFLOPS | BF16/FP16 TFLOPS | Notes |
| --- | --- | --- | --- |
| NVIDIA H100 SXM | ~67 | ~989 (Tensor Core) | Best for large batch training/inference |
| NVIDIA A100 80GB | ~19.5 | ~312 (Tensor Core) | Strong balance of memory and throughput |
| RTX 4090 | ~82 | ~165 (Tensor Core) | High single-node perf; workstation-friendly |
| RTX 3080 | ~30 | ~59 (Tensor Core) | Cost-effective mid-tier |
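
The peak numbers above can be sanity-checked with a matmul microbenchmark. Below is a minimal sketch, assuming PyTorch with a CUDA build; the matrix size, iteration count, and BF16 dtype are illustrative choices, not fixed parts of the AITBC stack. Expect sustained numbers well below the theoretical peaks in the table.

```python
# Minimal sketch: estimate sustained matmul TFLOPS on the local GPU.
# Assumes PyTorch with CUDA available; n, iters, and dtype are arbitrary.
import time
import torch

def matmul_tflops(n: int = 8192, iters: int = 50, dtype=torch.bfloat16) -> float:
    a = torch.randn(n, n, device="cuda", dtype=dtype)
    b = torch.randn(n, n, device="cuda", dtype=dtype)
    for _ in range(5):                  # warmup to trigger kernel selection
        a @ b
    torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(iters):
        a @ b
    torch.cuda.synchronize()
    elapsed = time.perf_counter() - start
    flops = 2 * n**3 * iters            # 2*n^3 FLOPs per n x n matmul
    return flops / elapsed / 1e12

if __name__ == "__main__":
    print(f"~{matmul_tflops():.1f} sustained TFLOPS (BF16)")
```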
## Latency (ms) — Transformer Inference (BERT-base, sequence=128)
| GPU | Batch 1 | Batch 8 | Notes |
| --- | --- | --- | --- |
| H100 | ~1.5 ms | ~2.3 ms | Best-in-class latency |
| A100 80GB | ~2.1 ms | ~3.0 ms | Stable at scale |
| RTX 4090 | ~2.5 ms | ~3.5 ms | Strong price/perf |
| RTX 3080 | ~3.4 ms | ~4.8 ms | Budget-friendly |
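
A rough harness for latency numbers like those above can be built with CUDA events. The sketch below assumes the Hugging Face `transformers` package and the public `bert-base-uncased` checkpoint; warmup and iteration counts are arbitrary, and the table's exact measurement setup is not specified here.

```python
# Minimal sketch: BERT-base inference latency at a given batch size.
# Assumes `transformers` is installed and a CUDA GPU is present.
import torch
from transformers import AutoModel, AutoTokenizer

def bert_latency_ms(batch: int, seq_len: int = 128, iters: int = 100) -> float:
    tok = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModel.from_pretrained("bert-base-uncased").cuda().eval()
    inputs = tok(["benchmark sentence"] * batch, padding="max_length",
                 max_length=seq_len, truncation=True,
                 return_tensors="pt").to("cuda")
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    with torch.inference_mode():
        for _ in range(10):             # warmup before timing
            model(**inputs)
        torch.cuda.synchronize()
        start.record()
        for _ in range(iters):
            model(**inputs)
        end.record()
        torch.cuda.synchronize()
    return start.elapsed_time(end) / iters  # elapsed_time() returns ms

print(f"batch 1: ~{bert_latency_ms(1):.2f} ms per forward pass")
```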
## Recommendations
- Prefer **H100/A100** for multi-tenant or high-throughput workloads.
- Use **RTX 4090** for cost-efficient single-node inference and fine-tuning.
- Tune batch size to balance latency vs. throughput; start with batch sizes 8–16 for inference.
- Enable mixed precision (BF16/FP16) when supported to maximize Tensor Core throughput; see the sketch after this list.
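
A minimal sketch of the mixed-precision recommendation, assuming a PyTorch model already on the GPU; the linear layer here is a stand-in for a real model.

```python
# Minimal sketch: BF16 autocast inference, falling back to FP16 on GPUs
# without BF16 support (e.g., pre-Ampere cards).
import torch

model = torch.nn.Linear(1024, 1024).cuda().eval()  # placeholder model
inputs = torch.randn(8, 1024, device="cuda")

dtype = torch.bfloat16 if torch.cuda.is_bf16_supported() else torch.float16
with torch.inference_mode(), torch.autocast(device_type="cuda", dtype=dtype):
    outputs = model(inputs)   # matmuls dispatch to Tensor Cores in BF16/FP16
print(outputs.dtype)          # reduced-precision output under autocast
```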
## Validation Checklist
- Run `nvidia-smi` under sustained load to confirm power/thermal headroom.
- Pin CUDA/cuDNN versions to tested combinations (e.g., CUDA 12.x for H100, 11.8+ for A100/4090).
- Verify kernel autotuning (e.g., `torch.backends.cudnn.benchmark = True`) for steady workloads; the sketch after this list covers both this flag and version logging.
- Re-benchmark after driver updates or major framework upgrades.
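
To keep re-benchmarks comparable across driver and framework upgrades, log the full software stack alongside each run. A minimal sketch using standard PyTorch introspection calls plus `nvidia-smi`:

```python
# Minimal sketch: enable cuDNN autotuning and record the versions behind
# a benchmark run, so results stay comparable after upgrades.
import subprocess
import torch

torch.backends.cudnn.benchmark = True   # autotune kernels for fixed shapes

print("torch:", torch.__version__)
print("CUDA runtime:", torch.version.cuda)
print("cuDNN:", torch.backends.cudnn.version())
print("GPU:", torch.cuda.get_device_name(0))
driver = subprocess.run(
    ["nvidia-smi", "--query-gpu=driver_version", "--format=csv,noheader"],
    capture_output=True, text=True,
).stdout.strip()
print("driver:", driver)
```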