PoS+W Inference Consensus

Unlike EVM/SVM verification or Bitcoin PoW, AI inference requires network nodes to compute and verify a large number of independent tasks, and verification itself is expensive. AegisAI therefore introduces the “PoS+W” Inference Consensus, under which resource nodes not only earn rewards through PoW but must also post staking collateral that can be slashed. This keeps the cost of verification low in most cases: when a resource node detects a malicious result, the slashed collateral of the malicious node becomes an incentive for nearly every resource node to verify the task.
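
The incentive flow can be summarized in a minimal sketch (the class names and accounting below are illustrative assumptions, not AegisAI protocol code):

```python
# Illustrative sketch of the PoS+W incentive flow: nodes earn via PoW,
# and slashed collateral funds the verification reward pool.
from dataclasses import dataclass

@dataclass
class ResourceNode:
    address: str
    stake: float        # staking collateral, slashable on misbehavior
    earnings: float = 0.0

@dataclass
class Network:
    verification_pool: float = 0.0  # funded by slashed collateral

    def reward_pow(self, node: ResourceNode, task_reward: float) -> None:
        """Normal path: a node earns by completing PoW inference tasks."""
        node.earnings += task_reward

    def slash(self, node: ResourceNode) -> None:
        """A node caught submitting a malicious result forfeits its collateral,
        which grows the pool that pays other nodes to verify the task."""
        self.verification_pool += node.stake
        node.stake = 0.0
```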

Ensuring Reproducible AI Computations

To implement this mechanism, we need to solve the following problems:

  • Any node can replicate and verify whether the computation or verification of other nodes is correct

  • Nodes cannot copy the computation or verification results of other nodes

AegisAI Specified LLM Inference (ASLI) is introduced to solve these problems. This reproducibility is crucial for maintaining trust and enabling decentralized validation. In ASLI, Resource Nodes compute Top-K random numbers using a pseudo-random algorithm whose seed is tied to the node address and the task hash, with precision standardized through rounding rules. A sample of intermediate-step parameters must also be submitted; which parameters are sampled is likewise determined by the node address and task hash. In the verification phase, a lightweight PoW task is applied to ensure that a resource node has actually run the AI model.
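
As an illustration, a deterministic seeding scheme of this kind could look like the sketch below. The hash function, rounding precision, and helper names are assumptions chosen for clarity; the real ASLI algorithm is defined by the protocol specification.

```python
# Sketch: derive reproducible, node-specific randomness from the node address
# and task hash, so any verifier can re-derive a node's draws but no node can
# copy another's results. SHA-256 and 6-decimal rounding are assumptions.
import hashlib
import random

def derive_seed(node_address: str, task_hash: str) -> int:
    """Seed tied to (node address, task hash), re-derivable by any verifier."""
    digest = hashlib.sha256(f"{node_address}:{task_hash}".encode()).digest()
    return int.from_bytes(digest[:8], "big")

def topk_randoms(node_address: str, task_hash: str, k: int) -> list[float]:
    """Pseudo-random Top-K draws, rounded to a standardized precision."""
    rng = random.Random(derive_seed(node_address, task_hash))
    return [round(rng.random(), 6) for _ in range(k)]

def sample_param_indices(node_address: str, task_hash: str,
                         n_params: int, n_samples: int) -> list[int]:
    """Which intermediate-step parameters to submit is also determined by
    the same (node address, task hash) pair."""
    rng = random.Random(derive_seed(node_address, task_hash) ^ 0x5A)
    return sorted(rng.sample(range(n_params), n_samples))
```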

Liquidation Mechanism and Security Model

When a resource node submits an incorrect result, its staking collateral becomes part of the verification reward pool. As long as any single resource node submits an erroneous result, the total verification reward grows, incentivizing nearly all resource nodes in the network to participate in verification. Here are the details (a sketch of the adjudication rule follows the list):

  • Liquidation Mechanism:

    • Nodes submitting incorrect results forfeit their staked collateral (approximately 10,000 USDT in $ASI tokens), which is redistributed to correct verifiers.

    • The most frequent result R (with n instances) is compared against the conflicting results (m instances). After 24 hours, if n > min{10m, 66% of active nodes}, R is deemed correct and the conflicting nodes are liquidated.

  • Enhanced Verification Incentives:

    • Incorrect submissions increase the verification reward pool (e.g., exceeding 10,000 USDT), encouraging widespread participation.
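
The adjudication rule can be sketched as follows, assuming m counts all conflicting instances; the function and variable names are illustrative, not part of the protocol.

```python
# Sketch of the 24-hour adjudication check: accept the leading result R
# only if n > min(10m, 0.66 * active_nodes).
from collections import Counter

def adjudicate(results: list[str], active_nodes: int) -> str | None:
    """Return the accepted result R, or None if no result clears the threshold."""
    if not results:
        return None
    result, n = Counter(results).most_common(1)[0]  # leading result R, n instances
    m = len(results) - n                            # conflicting instances
    if n > min(10 * m, 0.66 * active_nodes):
        return result  # R is deemed correct; conflicting nodes are liquidated
    return None

# Example: with 100 active nodes, 61 matching results beat 6 conflicting ones,
# since 61 > min(60, 66).
print(adjudicate(["R"] * 61 + ["X"] * 6, active_nodes=100))  # -> "R"
```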

Once the network reaches sufficient scale, AegisAI’s PoS+W consensus mechanism offers strong security: a successful attack requires the attacker to simultaneously control more than 50% of the network’s staked assets and more than 50% of its computational power, which is practically infeasible.
