Provider Network

Contribute Compute. Earn ANM from Useful AI Work.

AICF providers supply real GPU and CPU capacity for inference, embeddings, and agent workloads. Routing preference is driven by benchmark score, uptime, completion quality, and reputation.

Hardware Support
Class       | GPU Targets         | VRAM   | Expected Workloads
Entry       | RTX 3060 / A2000    | 12 GB+ | Inference + embeddings
Recommended | RTX 4090 / L40S     | 24 GB+ | High-throughput inference + agents
Pro         | A100 / H100 / MI300 | 80 GB+ | Enterprise latency + training jobs

Earnings Model

Jobs reserve an ANM budget in escrow. When receipts are accepted, the escrowed ANM is distributed to providers, with a transparent breakdown of the provider and treasury shares.

  • Higher sustained throughput increases routing volume
  • Lower failure rates increase effective utilization
  • Premium queues reward high-end benchmark classes
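The escrow settlement described above can be sketched as follows. The 10% treasury share, the receipt fields, and the per-unit proration are illustrative assumptions, not documented AICF constants:

```python
# Hypothetical sketch of escrow settlement: an ANM budget is reserved for a
# job, and on accepted receipts it is split between providers and the treasury.
# TREASURY_SHARE and the receipt schema are assumptions for illustration.

TREASURY_SHARE = 0.10  # assumed 10% treasury cut


def settle_job(escrow_anm: float, receipts: list[dict]) -> dict:
    """Distribute an escrowed ANM budget across accepted receipts.

    Each receipt looks like {"provider": str, "units": float, "accepted": bool}.
    Rejected receipts earn nothing, illustrating how lower failure rates
    translate into higher effective utilization.
    """
    accepted = [r for r in receipts if r["accepted"]]
    total_units = sum(r["units"] for r in accepted)
    payouts: dict[str, float] = {}
    if total_units == 0:
        # No accepted work: the whole budget stays with the treasury/escrow.
        return {"payouts": payouts, "treasury": escrow_anm}

    treasury = escrow_anm * TREASURY_SHARE
    pool = escrow_anm - treasury
    for r in accepted:
        share = pool * r["units"] / total_units  # pro-rata by completed units
        payouts[r["provider"]] = payouts.get(r["provider"], 0.0) + share
    return {"payouts": payouts, "treasury": treasury}
```

For example, a 100 ANM escrow with two accepted receipts (3 units and 1 unit) would, under these assumed parameters, pay 67.5 and 22.5 ANM with 10 ANM to the treasury.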

Trust and Reputation

Providers are ranked by uptime, completion quality, benchmark consistency, and dispute history. Stake and reputation influence assignment confidence.

  • Heartbeat compliance and job response windows
  • Challenge/dispute outcomes and slashing events
  • Node benchmark verification and hardware integrity
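One way to combine the signals above into a single ranking number is a weighted score with a dispute penalty. The weights and the 0-to-1 scaling here are assumptions for illustration, not AICF's actual policy:

```python
# Illustrative reputation score over the signals listed above.
# Weights and input scaling are assumed, not AICF's real formula.

def reputation_score(uptime: float,
                     completion_quality: float,
                     benchmark_consistency: float,
                     dispute_rate: float) -> float:
    """All inputs are normalized to [0, 1].

    Uptime, completion quality, and benchmark consistency contribute
    positively; the dispute rate scales the whole score down, so slashing
    events hurt even an otherwise strong node.
    """
    base = (0.35 * uptime
            + 0.35 * completion_quality
            + 0.30 * benchmark_consistency)
    return max(0.0, base * (1.0 - dispute_rate))
```

Under this sketch, a node with perfect signals but a 50% dispute rate ranks the same as a flawless node running at half strength, which is the intended effect of weighting dispute history heavily.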

Benchmark First

Run benchmark mode before joining queues. AICF captures the detected GPUs, memory, and throughput metrics that drive the scheduler's routing policy.

aicf-provider-worker benchmark --config provider.config.json
aicf-provider-worker start --config provider.config.json

Quickstart Flow

  1. Download a worker bundle for Windows, Linux, or Python source.
  2. Generate `provider.config.json` and bind your payout wallet.
  3. Run benchmark mode and verify the detected GPUs and throughput score.
  4. Start the worker daemon with heartbeat and logs enabled.
  5. Accept jobs, submit receipts, and track rewards in the dashboard.
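Steps 4 and 5 amount to a loop: send heartbeats inside the response window, accept jobs, do the work, and submit receipts. A minimal sketch of that loop, using a hypothetical stand-in client rather than the real `aicf-provider-worker` internals:

```python
# Illustrative worker loop for steps 4-5 above. FakeClient is a hypothetical
# stand-in for the network API; the real daemon's interface is not shown here.

class FakeClient:
    """Stand-in network client that serves one job, then runs dry."""

    def __init__(self):
        self.jobs = [{"id": "job-1", "payload": "embed:hello"}]
        self.receipts = []

    def heartbeat(self):
        # Report liveness; missing this window hurts heartbeat compliance.
        return True

    def next_job(self):
        return self.jobs.pop() if self.jobs else None

    def submit_receipt(self, receipt):
        self.receipts.append(receipt)


def run_worker(client, max_idle_polls=1):
    """Heartbeat, pull jobs, do work, submit receipts; stop when idle."""
    idle = 0
    while idle <= max_idle_polls:
        client.heartbeat()
        job = client.next_job()
        if job is None:
            idle += 1
            continue
        result = job["payload"].upper()  # placeholder for real inference work
        client.submit_receipt({"job_id": job["id"], "result": result})
```

The idle counter is only there so the sketch terminates; a real daemon would poll indefinitely and back off between empty polls.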