AegisAIEndpoint provides a universal AI inference request interface that lets developers send AI requests and receive responses from within smart contracts. The interface can be extended to cover a wide range of AI application scenarios, from text analysis and image recognition to more complex AI inference tasks.
Standard Overview
AegisAIEndpoint provides IAegisAIEndpoint.sol as the foundational interface for implementing universal AI inference:
/// @notice Defines parameters needed for sending AI processing requests
struct RequestParams {
    string prompts;        // Input prompts or queries for AI processing
    bytes schema;          // Schema identifier defining the request format
    bytes modelOptions;    // Additional AI model configuration options
    uint64 targetCount;    // Required number of AI nodes for execution
    address refundAddress; // Address to receive refunds
    bytes options;         // Reserved extension field, e.g. additional AI processing parameters or gas limits
}

/// @notice Defines the fee structure for AI requests
struct RequestFee {
    address token;    // Token address used for payment
    uint256 totalFee; // Total fee amount required for processing
}

/// @notice Defines the data packet structure for AI responses
struct ResponsePacket {
    bytes32 requestHash;  // Request hash identifier
    bytes payload;        // Response data payload
    uint64 status;        // Response status code. 0: request failed, payload unavailable;
                          // 1: request successful, payload valid
    uint64 confirmations; // Number of confirmations
}
Detailed Explanation
Get Request Fee
/**
 * @notice Calculates fees required for processing AI requests
 * @param params Contains all user-specified AI task parameters
 * @return Estimated processing fee (RequestFee)
 */
function quoteRequestFee(
    RequestParams calldata params
) external view returns (RequestFee memory);
Submit Request
/**
 * @notice Submit an AI processing request
 * @param requestParams Structured AI request parameters
 * @return requestHash Unique identifier for the submitted request
 */
function submitRequest(
    RequestParams calldata requestParams
) external payable returns (bytes32 requestHash);
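For example, an application contract can record the returned hash so that a later callback can be matched to its request. This is a minimal sketch meant to live inside a contract such as the MyAIApp sample shown later; the pendingRequests mapping and sendRequest wrapper are illustrative, not part of the standard:

mapping(bytes32 => bool) public pendingRequests;

function sendRequest(RequestParams calldata params) external payable returns (bytes32 requestHash) {
    // Forward the caller's payment to the endpoint and remember the request hash
    requestHash = IAegisAIEndpoint(endpoint).submitRequest{value: msg.value}(params);
    pendingRequests[requestHash] = true;
}

The process callback can then check pendingRequests[response.requestHash] before acting on the payload.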
Callback Interface
Any application contract that needs to interface with AegisAIEndpoint must implement the following interface:
interface IRequestCallbackHandler {
    /**
     * @notice Process response results from AegisAIEndpoint
     * @param resp Response data packet
     */
    function process(ResponsePacket memory resp) external payable;
}
Installation
//Coming soon...
Creating AegisAI Application Contracts
Each AegisAI application needs to set one parameter in its constructor:
Endpoint address: the address of the endpoint contract used to communicate with the protocol
It must also implement the sending and receiving functions:
submitRequest: the application calls this function to send AI requests
process: the Endpoint calls this function to deliver an AI response
Sample implementation:
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.24;

import {IAegisAIEndpoint, RequestParams, ResponsePacket} from "@aegisai/contracts/interfaces/IAegisAIEndpoint.sol";
import {IRequestCallbackHandler} from "@aegisai/contracts/interfaces/IRequestCallbackHandler.sol";

contract MyAIApp is IRequestCallbackHandler {
    address public endpoint;

    constructor(address _endpoint) {
        endpoint = _endpoint;
    }

    // Restrict callbacks to the configured Endpoint contract
    modifier onlyEndpoint() {
        require(msg.sender == endpoint, "Caller is not the endpoint");
        _;
    }

    // Send AI inference request
    function requestAI(
        string calldata prompt,
        bytes calldata schema,
        bytes calldata modelOptions
    ) external payable returns (bytes32) {
        return IAegisAIEndpoint(endpoint).submitRequest{value: msg.value}(
            RequestParams({
                prompts: prompt,
                schema: schema,
                modelOptions: modelOptions,
                targetCount: 1,
                refundAddress: msg.sender,
                options: ""
            })
        );
    }

    // Receive AI inference result; must be payable to match IRequestCallbackHandler
    function process(ResponsePacket memory response) external payable onlyEndpoint {
        // Process AI response results
        // ...
    }
}
Deployment Process
Deploy the application contract, setting the correct Endpoint address:
MyAIApp app = new MyAIApp(endpointAddress);
Send requests and wait for responses:
bytes32 requestId = app.requestAI{value: 0.1 ether}(
    "Analyze the sentiment of this text", // prompt
    "TEXT_ANALYSIS",                      // schema
    "GPT4"                                // modelOptions
);
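The returned requestId is the request hash; when a response arrives, it can be matched against ResponsePacket.requestHash inside the process callback.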
Request Fees
Each AI request requires a processing fee. Fees are calculated based on the following factors:
Number of target nodes
AI model type
Response time requirements
Gas consumption
You can estimate fees using the quoteRequestFee function.
It's recommended to call quoteRequestFee before sending a request to get an accurate fee estimate and avoid transaction failures due to insufficient fees.
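For example, an application contract can quote the fee on-chain and reject underpaid calls. The sketch below assumes the fee is payable in the native token and that RequestFee is importable from the interface file alongside RequestParams; the requestWithFeeCheck name is illustrative, not part of the standard:

function requestWithFeeCheck(RequestParams calldata params) external payable returns (bytes32) {
    // Quote the fee first, then make sure the caller sent enough value
    RequestFee memory fee = IAegisAIEndpoint(endpoint).quoteRequestFee(params);
    require(msg.value >= fee.totalFee, "Insufficient fee");
    return IAegisAIEndpoint(endpoint).submitRequest{value: msg.value}(params);
}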
Response Processing
AegisAI uses a Merkle proof system to verify the authenticity of AI responses. The verification process is as follows:
AI nodes generate responses
Responses are packed into a Merkle tree
The Merkle root is submitted to the AegisAIEndpoint contract
Responses are verified through Merkle proofs
function process(ResponsePacket memory response) external payable {
    // 1. Verify the response status
    require(response.status == 1, "Response failed");
    // 2. Read the result payload
    bytes memory result = response.payload;
    // 3. Execute business logic
    // ...
}
Ensuring Secure Callback Handling
To ensure only AegisAIEndpoint can call callback functions, implement a modifier:
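The sketch below restates the onlyEndpoint modifier from the sample contract above and applies it to the process callback:

// Only the configured Endpoint contract may deliver responses
modifier onlyEndpoint() {
    require(msg.sender == endpoint, "Caller is not the endpoint");
    _;
}

function process(ResponsePacket memory response) external payable onlyEndpoint {
    // Response handling logic runs only when called by the Endpoint
    // ...
}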