Project

Short description

A P2P network for AI inference with a free market for prices and per-request payments via X402 micropayments

Description

A P2P (peer-to-peer) network for AI inference creates a free and open market that addresses the problems of centralized inference: fixed pricing, censorship, and idle GPU capacity going to waste. It connects two types of participants:

  • GPU Providers: Anyone with an idle GPU can become a provider by running a node on the network. They register their available AI models and set their own prices, competing on cost, quality, and speed.
  • Consumers: Developers, end-users, or even other AI agents can access a wide range of models. Instead of a subscription or a fixed-price plan, they pay for each request, ensuring they only spend what they use.

How it’s made

Core Architecture

  1. Provider Nodes: These are the backbone of the network. Each node is a small service that hosts one or more AI models. It handles incoming requests, performs the inference, and manages the payment verification process.
  2. On-Chain Registry: A smart contract on a blockchain (like Ethereum or Base) acts as a public directory. Providers register their nodes, list their models, and share metadata such as pricing, latency, and reputation. This makes the network transparent and discoverable.
  3. Client SDK: A software development kit allows consumers to easily interact with the network. The SDK’s primary job is to query the on-chain registry, find the best-suited provider based on the user’s criteria (e.g., lowest price, fastest response), and automatically handle the payment and request process.
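The registry data and the SDK's provider-selection step can be sketched in TypeScript. The field and function names below are illustrative assumptions, not the actual contract schema or SDK API:

```typescript
// Sketch of the data a provider publishes to the on-chain registry.
// Field names are assumptions; the real contract schema may differ.
interface ModelListing {
  model: string;           // e.g. "llama-3.3"
  pricePerRequest: number; // price charged per inference request
}

interface ProviderRecord {
  address: string;      // provider's payout address
  endpoint: string;     // URL of the provider node
  models: ModelListing[];
  avgLatencyMs: number; // reported latency
  reputation: number;   // 0..1 score from past interactions
}

// Pick the cheapest provider that serves the requested model
// within the consumer's price ceiling.
function selectProvider(
  registry: ProviderRecord[],
  model: string,
  maxPrice: number
): ProviderRecord | undefined {
  const candidates = registry
    .map(p => ({ p, listing: p.models.find(m => m.model === model) }))
    .filter(x => x.listing && x.listing.pricePerRequest <= maxPrice);
  candidates.sort(
    (a, b) => a.listing!.pricePerRequest - b.listing!.pricePerRequest
  );
  return candidates[0]?.p;
}
```

A production selector would also weigh latency and reputation, not just price; this sketch shows only the cheapest-match rule.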

Economic Model and Payment Flow

The network operates on a per-request payment model, utilizing technologies like X402 for instant micropayments. This is how a transaction would work:

  1. A consumer uses the SDK to request an inference (e.g., “explain Web3 in simple terms”).
  2. The SDK consults the on-chain registry and identifies a provider offering the best terms.
  3. The SDK sends the request to the provider’s node, along with an on-chain payment.
  4. The provider’s node verifies the payment instantly.
  5. Once verified, the node performs the AI inference and returns the result.
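The transaction steps above can be sketched end to end with stubbed payment and inference calls. All names here are illustrative; they are not the real SDK or X402 API:

```typescript
// Stubbed sketch of the per-request flow; names are illustrative.
type PaymentReceipt = { txHash: string; amount: number; payer: string };

// Step 3: the consumer attaches an on-chain micropayment (stubbed here).
async function payOnChain(to: string, amount: number): Promise<PaymentReceipt> {
  return { txHash: '0xstub', amount, payer: '0xconsumer' };
}

// Step 4: the provider node checks the payment covers its listed price.
async function verifyPayment(receipt: PaymentReceipt, price: number): Promise<boolean> {
  return receipt.amount >= price;
}

// Step 5: the node runs the model and returns the result (stubbed).
async function runInference(prompt: string): Promise<string> {
  return `stub completion for: ${prompt}`;
}

// The provider node's handler ties steps 3-5 together: no escrow,
// no intermediary, just verify-then-serve.
async function handleRequest(prompt: string, price: number): Promise<string> {
  const receipt = await payOnChain('0xprovider', price);
  if (!(await verifyPayment(receipt, price))) {
    throw new Error('payment rejected');
  }
  return runInference(prompt);
}
```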

This direct, trustless payment system eliminates intermediaries, reduces fees, and ensures providers are compensated in real-time.

The landing page should explain what the project is and how to use it, and include a button linking to the documentation.

A sophisticated, high-quality landing page for a decentralized AI inference marketplace named “Infernet”. The design should be sleek and futuristic, using a dark theme with vibrant, tech-inspired neon blue and purple accents. The layout should be clean, spacious, and modern, with subtle animations.

Header:

  • A simple, elegant navigation bar with the logo “Infernet” on the left.
  • Navigation links: “How It Works”, “For Developers”, “Features”, “Get Started”.

Hero Section:

  • A bold, compelling headline: “Unlock a New AI Economy. A Global Compute Layer for the Onchain World.”
  • A concise sub-headline: “Infernet is a permissionless P2P network connecting idle GPU power with AI agents and developers, enabling dynamic pricing and trustless, per-request micropayments with X402.”
  • A prominent call-to-action button: “Launch App”
  • A large, animated hero illustration depicting decentralized servers or GPU nodes interconnected with blockchain blocks and data streams, visually representing a global compute network.

Problem & Opportunity Section:

  • A two-column grid.
  • Left column: Title “The Centralized Bottleneck”
    • Bullet points with icons: “Fixed pricing and high costs”, “Censorship & geo-restrictions”, “Wasted GPU resources globally”, “Limited model access.”
  • Right column: Title “The Infernet Advantage”
    • Bullet points with icons: “True market competition”, “Censorship-resistant inference”, “Monetize your idle hardware”, “Access any model, anywhere.”

How It Works Section (Flow):

  • A numbered, step-by-step diagram or animated visual of the core workflow.
  • Step 1: Provider Onboarding: A user registers their GPU node on the network using a simple CLI command.
  • Step 2: On-Chain Discovery: The node’s availability, models, and pricing are registered on our smart contract (on Base), making them discoverable.
  • Step 3: Client Request: A developer or agent uses our SDK to find the best provider based on their criteria.
  • Step 4: Atomic Micropayment: A per-request payment is sent instantly via X402 from the client to the provider.
  • Step 5: Trustless Execution: The provider performs the AI inference and returns the result, all verified by our protocol.
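Step 4's payment check could follow an HTTP 402 handshake in the spirit of X402: the node answers 402 with its payment requirements, and accepts a retry that carries a payment proof. The header name and payload fields below are assumptions for illustration; a real node would verify the proof on-chain or via a facilitator rather than trusting the decoded payload:

```typescript
// Sketch of a 402-style handshake on the provider node.
// Header and field names are assumptions, not the X402 spec.
interface PaymentRequirements { asset: string; amount: number; payTo: string }

function handle(
  headers: Record<string, string>,
  reqs: PaymentRequirements
): { status: number; body: string } {
  const proof = headers['x-payment'];
  if (!proof) {
    // No payment attached: tell the client what to pay, then retry.
    return { status: 402, body: JSON.stringify(reqs) };
  }
  // Stub check: decode the proof and compare amounts. A real node
  // would verify the payment on-chain before serving the request.
  const payment = JSON.parse(Buffer.from(proof, 'base64').toString());
  if (payment.amount < reqs.amount) {
    return { status: 402, body: 'insufficient payment' };
  }
  return { status: 200, body: 'inference result' };
}
```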

For Developers Section:

  • A title: “Build on the Infernet Protocol”

  • A sub-headline: “Seamless integration for dApps and AI agents.”

  • A clean code block showing a full-featured example of the SDK:

    import { InfernetClient } from 'infernet-sdk';

    const client = new InfernetClient({ network: 'base' });

    // Queries the registry, picks the best provider, pays per
    // request via X402, and runs the inference in one call
    const response = await client.inference({
      model: 'llama-3.3',
      prompt: 'Describe Web3 in simple terms.',
      maxPrice: 0.0002  // highest price the client will accept per request
    });

    console.log(response.data);