Build decentralized AI apps on a distributed GPU network
Integrate via the SDK and on-chain contracts. AIL2 handles routing, scheduling, receipts, settlement, and miner incentives, so you ship faster across Ethereum, BNB Chain, XLayer, Base, Mantle, and GIWA.
Router & Scheduler
Multi-objective routing (latency / reliability / cost / risk)
Receipts & Settlement
Auditable receipts + batch settlement + auto split
Cross-chain Native
One integration, serve users across chains
A protocol-grade core stack for decentralized AI.
You Bring
- Model container (Docker)
- Pricing Strategy
- Revenue Split Policy
AIL2 Provides
- Global GPU network
- Intelligent Routing
- Anti-fraud & Settlement
You Get
- Elastic Scalability
- Auditable Accounting
- Instant Incentives
Core Protocol Capabilities
A modular stack designed for high-throughput AI inference.
Router & Scheduler
Predict queue time, score nodes, retry/fallback automatically across the entire decentralized mesh.
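The multi-objective routing described above can be sketched as a weighted node score. This is an illustrative model only: the field names, weights, and the linear blend are assumptions, not the actual AIL2 scoring formula.

```python
# Sketch of multi-objective node scoring (latency / reliability / cost / risk).
# Weights and node fields are illustrative placeholders.

def score_node(node: dict, w_latency=0.4, w_reliability=0.3,
               w_cost=0.2, w_risk=0.1) -> float:
    """Lower is better: penalize latency, cost, and risk; reward reliability."""
    return (
        w_latency * node["p95_latency_ms"] / 1000
        + w_cost * node["price_per_gpu_sec"]
        + w_risk * node["risk_score"]
        - w_reliability * node["success_rate"]
    )

def pick_node(nodes: list[dict]) -> dict:
    # A real scheduler would also retry/fallback to the next-best node on failure.
    return min(nodes, key=score_node)

nodes = [
    {"id": "gpu-kr-0281", "p95_latency_ms": 120, "price_per_gpu_sec": 0.02,
     "risk_score": 0.05, "success_rate": 0.999},
    {"id": "gpu-us-0117", "p95_latency_ms": 300, "price_per_gpu_sec": 0.01,
     "risk_score": 0.20, "success_rate": 0.970},
]
print(pick_node(nodes)["id"])  # the low-latency, high-reliability node wins here
```

Tuning the weights shifts the trade-off: raising `w_cost` favors cheaper nodes at the expense of latency.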
Model Containers
Versioning, canary rollout, rollback. Miners pull your custom images and run them in secure isolated environments.
Usage Receipts
Signed receipts with token counts, GPU seconds, and fees. Merkle batch proofs make every charge independently auditable.
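Batching receipts under a Merkle root can be sketched as follows. The hashing scheme here (SHA-256 over canonical JSON, pairwise concatenation, duplicating the last leaf on odd levels) is an assumption for illustration, not the normative AIL2 encoding.

```python
import hashlib
import json

def leaf(receipt: dict) -> bytes:
    # Hash a canonical JSON encoding of the receipt (key order fixed).
    return hashlib.sha256(json.dumps(receipt, sort_keys=True).encode()).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    level = leaves
    while len(level) > 1:
        if len(level) % 2:                 # duplicate last node on odd levels
            level = level + [level[-1]]
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

receipts = [
    {"receipt_id": "rcpt_1", "tokens": 1240, "gpu_sec": 0.84, "fee": "0.024"},
    {"receipt_id": "rcpt_2", "tokens": 512, "gpu_sec": 0.31, "fee": "0.009"},
]
root = merkle_root([leaf(r) for r in receipts])
print(root.hex())
```

Only the 32-byte root needs to go on-chain; any single receipt can later be proven against it with a logarithmic-size inclusion path.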
AIL2 Settlement
Batch finalize inference proofs with a dispute window and automatic revenue distribution to token holders.
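The automatic revenue split at settlement can be sketched as below. The three-way split (model owner / miner / protocol) and the basis-point shares are illustrative assumptions; `Decimal` avoids float rounding on fees.

```python
from decimal import Decimal

# Hypothetical split in basis points; must sum to 10_000.
SPLIT_BPS = {"model_owner": 7000, "miner": 2500, "protocol": 500}

def settle(batch_fees: list[str]) -> dict:
    """Sum a batch of receipt fees and distribute by the configured split."""
    total = sum(Decimal(f) for f in batch_fees)
    return {party: total * bps / 10_000 for party, bps in SPLIT_BPS.items()}

print(settle(["0.024", "0.009"]))  # total 0.033 split three ways
```

In practice the split would execute after the dispute window closes, so contested receipts can be excluded from `batch_fees` first.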
SLA & Reputation
Real-time monitoring of node success rates and p95 latency. Reputation scores affect routing priority.
Security & Anti-fraud
Risk-weighted rewards and automated slashing for forged receipts or duplicated work.
From Request to Settlement
The lifecycle of a decentralized inference request.
Request initiated via SDK/Gateway with project credentials.
Router validates authorization, rate-limits, and checks credit balance.
Scheduler selects optimal GPU node based on latency and reputation.
Container executes inference and returns signed result + receipt.
Receipt is batched into Merkle tree for Layer 2 settlement and split payment.
Everything is measurable, auditable, and programmable.
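Step 4 of the lifecycle returns a signed result and receipt. The sketch below shows how such a signature could be produced and verified; HMAC with a shared node secret is a stand-in for illustration only, where real nodes would sign with their on-chain keys.

```python
import hashlib
import hmac
import json

NODE_SECRET = b"demo-node-secret"   # hypothetical; real nodes use on-chain keys

def sign_receipt(receipt: dict) -> str:
    # Sign a canonical JSON encoding so field order cannot change the digest.
    payload = json.dumps(receipt, sort_keys=True).encode()
    return hmac.new(NODE_SECRET, payload, hashlib.sha256).hexdigest()

def verify_receipt(receipt: dict, signature: str) -> bool:
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(sign_receipt(receipt), signature)

receipt = {"receipt_id": "rcpt_9f3a", "tokens": 1240,
           "gpu_sec": 0.84, "fee": "0.024"}
sig = sign_receipt(receipt)
print(verify_receipt(receipt, sig))   # a tampered receipt would fail verification
```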
Ship in 5 Minutes
Start building with a single API call.
Create Project
Get your unique project_id in the console.
Generate API Key
Securely sign your inference requests.
Register Model
Define requirements, pricing, and revenue split.
```bash
curl -X POST "https://api.ail2.network/v1/infer" \
  -H "Authorization: Bearer $AIL2_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "project_id": "p_123",
    "model_id": "nebula-ai",
    "version": "1.2.3",
    "input": {"prompt": "Explain zk-proof in 3 bullets"},
    "options": {"timeout_ms": 12000}
  }'
```

Example response:

```json
{
  "output": {"text": "..."},
  "receipt": {
    "receipt_id": "rcpt_9f3a...",
    "tokens": 1240,
    "gpu_sec": 0.84,
    "fee": "0.024",
    "node": "gpu-kr-0281"
  }
}
```

Core Capabilities Matrix
| Capability | What you get | Notes |
|---|---|---|
| Inference API | Sync/Async endpoints for all models | HTTP/WS Support |
| Streaming | Token-by-token or event-based streaming | p99 < 50ms TTFT |
| Model Registry | Versions, canary, and instant rollbacks | Integrated CI/CD |
| Receipts | Signed Merkle-provable usage receipts | Audit-ready anytime |
| Settlement | Layer 2 automated split execution | Settles on Mainnet |
| Policies | Custom rate-limits and region geofencing | Configurable per key |
Frequently Asked Questions
What hardware do miners need?
Miners must provide high-performance GPUs (NVIDIA A100, H100, or RTX 4090) with high-bandwidth interconnects to meet the AIL2 Core performance SLA.
How are private models and inputs protected?
AIL2 supports TEE-based (Trusted Execution Environment) containers for models that require weights or inputs to remain encrypted throughout the inference cycle.
Can developers set their own pricing?
Yes. Developers can define custom pricing per token, per GPU second, or per inference task directly in the model registration contract.
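The three pricing modes mentioned above can be sketched as a simple fee function. The rates are illustrative placeholders, not AIL2 defaults.

```python
# Hypothetical rates for each pricing mode a developer might register.
RATES = {"per_token": 0.00002, "per_gpu_sec": 0.03, "per_task": 0.01}

def fee(usage: dict, mode: str) -> float:
    """Compute the fee for one inference under the chosen pricing mode."""
    if mode == "per_token":
        return usage["tokens"] * RATES["per_token"]
    if mode == "per_gpu_sec":
        return usage["gpu_sec"] * RATES["per_gpu_sec"]
    if mode == "per_task":
        return RATES["per_task"]
    raise ValueError(f"unknown pricing mode: {mode}")

usage = {"tokens": 1240, "gpu_sec": 0.84}
print(round(fee(usage, "per_token"), 6))
```

Whichever mode is registered, the resulting fee is what appears in the signed usage receipt and flows into batch settlement.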