Build decentralized AI apps on a distributed GPU network

Integrate via SDK & contracts. AIL2 handles routing, scheduling, receipts, settlement, and miner incentives—so you ship faster across Ethereum, BNB Chain, XLayer, Base, Mantle, and GIWA.

Router & Scheduler

Multi-objective routing (latency / reliability / cost / risk)

Receipts & Settlement

Auditable receipts + batch settlement + auto split

Cross-chain Native

One integration, serve users across chains

A protocol-grade core stack for decentralized AI.

You Bring

  • Model Container (Docker)
  • Pricing Strategy
  • Revenue Split Policy

AIL2 Provides

  • Global GPU Network
  • Intelligent Routing
  • Anti-fraud & Settlement

You Get

  • Elastic Scalability
  • Auditable Accounting
  • Instant Incentives
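As a sketch of how a revenue split policy could be expressed and evaluated, here is a hypothetical basis-point policy in TypeScript. The types and names are illustrative assumptions, not AIL2's actual contract interface:

```typescript
// Hypothetical revenue-split policy: shares in basis points (1/100 of a percent).
// Illustrative only; AIL2's on-chain policy format may differ.
type SplitPolicy = { recipient: string; bps: number }[];

// Distribute an integer fee (in the smallest token unit) according to the policy.
// Integer-division remainder goes to the first recipient so no dust is lost.
function splitRevenue(fee: bigint, policy: SplitPolicy): Map<string, bigint> {
  const totalBps = policy.reduce((sum, p) => sum + p.bps, 0);
  if (totalBps !== 10_000) throw new Error("policy must sum to 10000 bps");
  const out = new Map<string, bigint>();
  let distributed = 0n;
  for (const p of policy) {
    const share = (fee * BigInt(p.bps)) / 10_000n;
    out.set(p.recipient, share);
    distributed += share;
  }
  // Assign the rounding remainder to the first recipient.
  const first = policy[0].recipient;
  out.set(first, out.get(first)! + (fee - distributed));
  return out;
}
```

For example, a 70/25/5 developer/miner/protocol split of 1,000,000 units yields 700,000, 250,000, and 50,000 respectively.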

Core Protocol Capabilities

A modular stack designed for high-throughput AI inference.

Router & Scheduler

Predict queue time, score nodes, retry/fallback automatically across the entire decentralized mesh.
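A multi-objective score of this kind can be sketched as a weighted sum. The weights and field names below are assumptions for illustration, not the production scheduler:

```typescript
// Illustrative multi-objective node score (latency / reliability / cost / risk).
// Weights and field names are assumptions, not AIL2's actual scheduler.
interface NodeStats {
  p95LatencyMs: number;   // recent 95th-percentile latency
  successRate: number;    // 0..1 over a sliding window
  pricePerGpuSec: number; // quoted price per GPU-second
  riskScore: number;      // 0 (trusted) .. 1 (high risk)
}

function scoreNode(
  n: NodeStats,
  w = { lat: 0.35, rel: 0.35, cost: 0.2, risk: 0.1 }
): number {
  const latency = 1 / (1 + n.p95LatencyMs / 1000); // lower latency -> closer to 1
  const cost = 1 / (1 + n.pricePerGpuSec);         // cheaper -> closer to 1
  return w.lat * latency + w.rel * n.successRate + w.cost * cost + w.risk * (1 - n.riskScore);
}

// Pick the top-scoring node; retry/fallback would move to the next-ranked node.
function pickNode(nodes: NodeStats[]): NodeStats {
  return nodes.reduce((best, n) => (scoreNode(n) > scoreNode(best) ? n : best));
}
```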

Model Containers

Versioning, canary rollout, rollback. Miners pull your custom images and run them in secure isolated environments.

Usage Receipts

Signed receipts with token counts, GPU seconds, and fees. Merkle batch proofs make every receipt independently verifiable.
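Batching receipts under a Merkle root can be sketched as follows. SHA-256 and this tree layout are assumptions; the actual on-chain proof format may differ:

```typescript
import { createHash } from "node:crypto";

// Sketch of batching serialized receipts into a Merkle tree.
const sha256 = (data: string | Buffer): Buffer =>
  createHash("sha256").update(data).digest();

// Leaf = hash of the serialized receipt; parent = hash of concatenated children.
function merkleRoot(receipts: string[]): string {
  if (receipts.length === 0) throw new Error("empty batch");
  let level: Buffer[] = receipts.map((r) => sha256(r));
  while (level.length > 1) {
    const next: Buffer[] = [];
    for (let i = 0; i < level.length; i += 2) {
      const right = level[i + 1] ?? level[i]; // duplicate the last node on odd levels
      next.push(sha256(Buffer.concat([level[i], right])));
    }
    level = next;
  }
  return level[0].toString("hex");
}
```

Because the root commits to every receipt in the batch, a single on-chain value is enough to later prove any individual charge.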

AIL2 Settlement

Inference proofs are finalized in batches behind a dispute window, with revenue automatically distributed to token holders.

SLA & Reputation

Real-time monitoring of node success rates and p95 latency. Reputation scores affect routing priority.

Security & Anti-fraud

Risk-weighted rewards and automated slashing for forged receipts or duplicated work.
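Duplicate-work detection at settlement time can be as simple as rejecting repeated receipt IDs within a batch. This toy detector illustrates the idea; field names are assumptions:

```typescript
// Toy duplicate-receipt detector: a settlement batch must never pay the
// same receipt twice. Field names are illustrative, not the real schema.
interface BatchReceipt { receiptId: string; node: string; fee: number }

// Returns receipts flagged as duplicates (same receiptId seen earlier in the batch).
function findDuplicates(batch: BatchReceipt[]): BatchReceipt[] {
  const seen = new Set<string>();
  const dupes: BatchReceipt[] = [];
  for (const r of batch) {
    if (seen.has(r.receiptId)) dupes.push(r); // candidate for slashing
    else seen.add(r.receiptId);
  }
  return dupes;
}
```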

From Request to Settlement

The lifecycle of a decentralized inference request.

1

Request initiated via SDK/Gateway with project credentials.

2

Router validates authorization, rate-limits, and checks credit balance.

3

Scheduler selects optimal GPU node based on latency and reputation.

4

Container executes inference and returns signed result + receipt.

5

Receipt is batched into Merkle tree for Layer 2 settlement and split payment.

Everything is measurable, auditable, and programmable.
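The five steps above can be sketched as one mocked, synchronous pipeline. Every function here is an illustrative stand-in, not the real AIL2 SDK surface:

```typescript
// Mocked lifecycle: request -> auth -> schedule -> execute -> settle.
const settlementQueue: { receiptId: string; fee: string }[] = [];

// 2. Router validates authorization (rate limits and credit checks elided).
function authorize(projectId: string): void {
  if (!projectId.startsWith("p_")) throw new Error("unauthorized");
}

// 3. Scheduler picks a GPU node (here: a fixed choice).
const schedule = (): string => "gpu-kr-0281";

// 4. Container executes inference and returns a signed result + receipt.
function execute(node: string, prompt: string) {
  return { text: `echo: ${prompt}`, receipt: { receiptId: `rcpt_${node}`, fee: "0.024" } };
}

// 5. Receipt is queued for Merkle batching and Layer 2 settlement.
const enqueueForSettlement = (r: { receiptId: string; fee: string }) =>
  settlementQueue.push(r);

// 1. Entry point: what the SDK/Gateway call kicks off.
function infer(projectId: string, prompt: string) {
  authorize(projectId);
  const node = schedule();
  const { text, receipt } = execute(node, prompt);
  enqueueForSettlement(receipt);
  return { text, receipt };
}
```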

Ship in 5 Minutes

Start building with a single API call.

Create Project

Get your unique project_id in the console.

Generate API Key

Securely sign your inference requests.

Register Model

Define requirements, pricing, and revenue split.
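If your deployment signs request bodies with the API key rather than sending it only as a bearer token, a signature helper might look like this. The HMAC-SHA256 scheme is an assumption; check the console docs for the exact format:

```typescript
import { createHmac } from "node:crypto";

// Hypothetical request-signing helper: HMAC-SHA256 over the raw JSON body,
// keyed by your API key. The scheme is illustrative, not the documented format.
function signRequest(apiKey: string, body: string): string {
  return createHmac("sha256", apiKey).update(body).digest("hex");
}
```

The resulting hex digest would be attached alongside the request so the gateway can verify the body was produced by the key holder.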

bash (Inference Request)
curl -X POST "https://api.ail2.network/v1/infer" \
  -H "Authorization: Bearer $AIL2_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "project_id": "p_123",
    "model_id": "nebula-ai",
    "version": "1.2.3",
    "input": {"prompt": "Explain zk-proof in 3 bullets"},
    "options": {"timeout_ms": 12000}
  }'
json (Response with Receipt)
{
  "output": {"text": "..."},
  "receipt": {
    "receipt_id": "rcpt_9f3a...",
    "tokens": 1240,
    "gpu_sec": 0.84,
    "fee": "0.024",
    "node": "gpu-kr-0281"
  }
}
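Before trusting a response, a client can sanity-check the receipt it came with. This validator is an illustrative sketch; the field names follow the sample response above, but the checks themselves are assumptions:

```typescript
// Receipt shape matching the sample response above (names from the docs;
// the validation rules are illustrative, not a specified client requirement).
interface InferenceReceipt {
  receipt_id: string;
  tokens: number;
  gpu_sec: number;
  fee: string;
  node: string;
}

// Reject obviously malformed receipts before accepting the result.
function validateReceipt(r: InferenceReceipt): boolean {
  return (
    r.receipt_id.startsWith("rcpt_") &&
    r.tokens > 0 &&
    r.gpu_sec > 0 &&
    Number(r.fee) > 0 &&
    r.node.length > 0
  );
}
```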

Core Capabilities Matrix

Capability | What you get | Notes
Inference API | Sync/async endpoints for all models | HTTP/WS support
Streaming | Token-by-token or event-based streaming | p99 < 50 ms TTFT
Model Registry | Versions, canary, and instant rollbacks | Integrated CI/CD
Receipts | Signed, Merkle-provable usage receipts | Audit-ready anytime
Settlement | Layer 2 automated split execution | Settles on Mainnet
Policies | Custom rate limits and region geofencing | Configurable per key

Frequently Asked Questions

What hardware do miners need to provide?

Miners must provide enterprise-grade GPUs (NVIDIA A100, H100, or RTX 4090) with high-bandwidth interconnects to meet the AIL2 Core performance SLA.

How are private model weights and inputs protected?

AIL2 supports containers running inside Trusted Execution Environments (TEEs) for models that require weights or inputs to remain encrypted throughout the inference cycle.

Can developers set their own pricing?

Yes. Developers can define custom pricing per token, per GPU second, or per inference task directly in the model registration contract.