
AegisAI for EVM Developers

AegisAIEndpoint provides a universal AI inference request interface that allows developers to send AI requests and receive responses from within smart contracts. The interface can be extended to cover a wide range of AI application scenarios, from text analysis and image recognition to more complex AI inference tasks.

Standard Overview

AegisAIEndpoint.sol serves as the foundational interface for implementing universal AI inference:

Request Interface Definition

interface IAegisAIEndpoint {
    function quoteRequestFee(RequestParams calldata params) external view returns (RequestFee memory);
    function submitRequest(RequestParams calldata requestParams) external payable returns (bytes32 requestHash);
}

Core Data Structures

/// @notice Defines parameters needed for sending AI processing requests
struct RequestParams {
    string prompts;        // Input prompts or queries for AI processing
    bytes schema;          // Schema identifier defining request format
    bytes modelOptions;    // Additional AI model configuration options
    uint64 targetCount;    // Required number of AI nodes for execution
    address refundAddress; // Address to receive refunds
    bytes options;         // Reserved extension fields, such as additional AI processing parameters or gas limits
}

/// @notice Defines fee structure related to AI requests
struct RequestFee {
    address token;         // Token address used for payment
    uint256 totalFee;      // Total fee amount required for processing
}

/// @notice Defines data packet structure for AI responses
struct ResponsePacket {
    bytes32 requestHash;   // Request hash identifier
    bytes payload;         // Response data payload
    uint64 status;         // Response status code, 0: request failed, payload unavailable.
                           // 1: request successful, payload valid.
    uint64 confirmations;  // Number of confirmations
}

Detailed Explanation

Get Request Fee

/**
 * @notice Calculates fees required for processing AI requests
 * @param params Contains all user-specified AI task parameters
 * @return Estimated processing fee (RequestFee)
 */
function quoteRequestFee(
    RequestParams calldata params
) external view returns (RequestFee memory);

Submit Request

/**
 * @notice Submit an AI processing request
 * @param requestParams Structured AI request parameters
 * @return requestHash Unique identifier for the submitted request
 */
function submitRequest(
    RequestParams calldata requestParams
) external payable returns (bytes32 requestHash);
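
In practice it is worth storing the returned requestHash, since the eventual response identifies itself by the same hash. A minimal sketch (the sendRequest wrapper and pendingRequests mapping are illustrative, and endpoint is assumed to hold the Endpoint address as in the sample contract below):

// Illustrative: remember who initiated each request so the callback can route the result later
mapping(bytes32 => address) public pendingRequests;

function sendRequest(RequestParams calldata params) external payable {
    bytes32 requestHash = IAegisAIEndpoint(endpoint).submitRequest{value: msg.value}(params);
    pendingRequests[requestHash] = msg.sender;
}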

Callback Interface

Any application contract that needs to interface with AegisAIEndpoint must implement the following interface:

interface IRequestCallbackHandler {
    /**
     * @notice Process response results from AegisAIEndpoint
     * @param resp Response data packet
     */
    function process(ResponsePacket memory resp) external payable;
}

Installation

//Coming soon...

Creating AegisAI Application Contracts

Each AegisAI application needs to set a parameter in its constructor:

  • Endpoint address: The address of the endpoint contract used for communication with the protocol

It must also handle sending and receiving:

  • submitRequest: the application calls this Endpoint function to send AI requests

  • process: the Endpoint calls this function on the application when an AI response arrives

Sample implementation:

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.24;

import {IAegisAIEndpoint, RequestParams, ResponsePacket} from "@aegisai/contracts/interfaces/IAegisAIEndpoint.sol";
import {IRequestCallbackHandler} from "@aegisai/contracts/interfaces/IRequestCallbackHandler.sol";

contract MyAIApp is IRequestCallbackHandler {
    address public endpoint;
    
    constructor(address _endpoint) {
        endpoint = _endpoint;
    }
    modifier onlyEndpoint() {
        require(msg.sender == endpoint, "caller is not the endpoint");
        _;
    }
    
    // Send AI inference request
    function requestAI(
        string calldata prompt,
        bytes calldata schema,
        bytes calldata modelOptions
    ) external payable returns (bytes32) {
        return IAegisAIEndpoint(endpoint).submitRequest{value: msg.value}(
            RequestParams({
                prompts: prompt,
                schema: schema,
                modelOptions: modelOptions,
                targetCount: 1,
                refundAddress: msg.sender,
                options: ""
            })
        );
    }
    
    // Receive AI inference result (called by the Endpoint)
    function process(ResponsePacket memory response) external payable onlyEndpoint {
        // Process AI response results
        // ...
    }
}

Deployment Process

  1. Deploy the application contract, setting the correct Endpoint address:

MyAIApp app = new MyAIApp(endpointAddress);

  2. Send a request and wait for the response, which is delivered later through the process callback:

bytes32 requestId = app.requestAI{value: 0.1 ether}(
    "Analyze the sentiment of this text",  // prompt
    "TEXT_ANALYSIS",                       // schema
    "GPT4"                                 // modelOptions
);

Request Fees

Each AI request requires a processing fee. Fees are calculated based on the following factors:

  1. Number of target nodes

  2. AI model type

  3. Response time requirements

  4. Gas consumption

You can estimate fees using the quoteRequestFee function:

RequestFee memory fee = endpoint.quoteRequestFee(params);

It's recommended to call quoteRequestFee before sending a request to get an accurate fee estimate and avoid transaction failures due to insufficient fees.
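
A minimal sketch of that pattern inside an application contract (the requestWithQuote helper is illustrative, and it assumes the fee is paid in the native token; if fee.token points to an ERC-20, the payment flow would differ):

// Sketch: quote the fee up front and refuse to submit if the caller attached too little
function requestWithQuote(RequestParams calldata params) external payable returns (bytes32) {
    RequestFee memory fee = IAegisAIEndpoint(endpoint).quoteRequestFee(params);
    require(msg.value >= fee.totalFee, "Insufficient fee");
    return IAegisAIEndpoint(endpoint).submitRequest{value: msg.value}(params);
}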

Response Processing

AegisAI uses a Merkle proof system to verify the authenticity of AI responses. The verification process is as follows:

  1. AI nodes generate responses

  2. Responses are packed into a Merkle tree

  3. The Merkle root is submitted to the AegisAIEndpoint contract

  4. Responses are verified through Merkle proofs

function process(ResponsePacket memory response) external {
    // 1. Verify the response
    require(response.status == 1, "Response failed");
    
    // 2. Process the result
    bytes memory result = response.payload;
    
    // 3. Execute business logic
    // ...
}
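
A slightly fuller sketch stores each verified payload under its request hash so other functions in the contract can read it later (the results mapping is illustrative; the onlyEndpoint modifier is covered in the next section):

// Illustrative: persist verified payloads keyed by request hash
mapping(bytes32 => bytes) public results;

function process(ResponsePacket memory response) external payable onlyEndpoint {
    // Only accept successful responses with a valid payload
    require(response.status == 1, "Response failed");
    results[response.requestHash] = response.payload;
}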

Ensuring Secure Callback Handling

To ensure only AegisAIEndpoint can call callback functions, implement a modifier:

// Declare the custom error used by the modifier
error EndpointRequired();

modifier onlyEndpoint() {
    if (msg.sender != endpoint) revert EndpointRequired();
    _;
}

Now you can build your own AI applications using AegisAIEndpoint. For a complete example, see the Build Your First AI dApp section.