
NeurochainAI Node Architecture
Three types of nodes together power the integration and operation of NeurochainAI's L1 and L3.
The NeurochainAI network operates on a sophisticated, three-node architecture designed to deliver decentralized AI inference efficiently and securely. Each node type performs a specialized role, ensuring that user requests are validated, routed, and executed correctly.
The three types of nodes are:
Validator Nodes 🛡️
Gateway Nodes 🌐
Compute Nodes ⚙️
🛡️ Validator Nodes
Core Function: To validate AI request submissions and completions across the peer-to-peer network.
Validator Nodes form the core of the network's trust and verification layer. Operating under a Proof-of-Stake (PoS) model, they are responsible for scrutinizing the entire lifecycle of an AI inference request. Their performance is tracked by a rating system, ensuring only reliable and honest actors are entrusted with network validation.
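To illustrate how a rating system can track validator reliability, here is a toy sketch. The update rule (an exponential moving average toward 1.0 for honest behavior and 0.0 for faults) is an assumption for illustration only, not NeurochainAI's actual formula.

```python
# Toy validator rating tracker. The EMA update rule is an illustrative
# assumption, not the network's real rating mechanism.

def update_rating(rating: float, success: bool, weight: float = 0.1) -> float:
    """Move the rating toward 1.0 on honest validation, 0.0 on failure."""
    target = 1.0 if success else 0.0
    return (1 - weight) * rating + weight * target

# A validator starting at a neutral rating, with one faulty validation.
r = 0.5
for outcome in [True, True, True, False, True]:
    r = update_rating(r, outcome)
print(round(r, 3))
```

Under a scheme like this, a single failure dents the rating but consistent honest behavior recovers it, which is the property a network needs before entrusting a node with validation.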
Key Responsibilities:
Validate AI Requests: Act as P2P nodes that check and approve incoming AI inference requests from users.
Confirm Cost Trees: Verify the computational cost breakdown of a request, which is then sent to the user for final, signed approval before execution begins.
Provide Completion Proofs: After a Compute Node finishes a job, Validators generate and confirm validity proofs, which are essential for finalizing the transaction and updating the network state.
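The cost-tree step above can be sketched in code. The `CostTree` structure and `validator_confirms` check below are illustrative assumptions about what "verifying a computational cost breakdown" involves, not the actual protocol data types.

```python
from dataclasses import dataclass, field

# Illustrative sketch: a cost tree breaks an AI request into sub-operations,
# each with its own cost. All names here are assumptions, not the real protocol.

@dataclass
class CostTree:
    """Computational cost breakdown for an AI inference request."""
    operation: str
    cost: int                           # cost of this step, in network units
    children: list = field(default_factory=list)

    def total(self) -> int:
        """Total cost is this node's cost plus all sub-operations."""
        return self.cost + sum(child.total() for child in self.children)

def validator_confirms(tree: CostTree, budget: int) -> bool:
    """A validator checks the cost tree before sending it for user sign-off."""
    return tree.total() <= budget

# Example: a request whose total cost fits the user's budget.
tree = CostTree("inference", 10, [CostTree("tokenize", 2), CostTree("decode", 3)])
print(tree.total())                         # 15
print(validator_confirms(tree, budget=20))  # True
```

Only after the validator confirms the tree (and the user signs the approved total) does execution begin, so the user never pays more than the breakdown they signed.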
Who Should Run This Node?
Entities committed to the network's integrity and interested in earning rewards through staking.
Professional staking operators and long-term token holders.
🌐 Gateway Nodes
Core Function: To serve as the RPC entry point for all user interactions with the AI network.
Gateway Nodes are the network's front door. They provide the necessary API endpoints for users and applications to submit AI jobs. Their primary role is to act as a sophisticated switchboard, managing the flow of communication and proxying requests between users, Validators, and the appropriate Compute Nodes.
Key Responsibilities:
RPC Endpoint: Offer a stable and reliable RPC interface for users to interact with the network.
Request Proxying: Seamlessly handle the communication flow from a user's initial request to the Validators for verification.
Job Routing: Forward the validated request to the correct Compute Node on the appropriate subnet.
Result Forwarding: Relay the final results and completion proofs back to the user.
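The four responsibilities above form one request pipeline, sketched below. The validator and compute-node interfaces are hypothetical stand-ins (the real network uses P2P and RPC calls); only the ordering of steps reflects the description above.

```python
# Hypothetical stand-ins for the validator and compute-node interfaces.
class StubValidator:
    def validate(self, request: dict) -> dict:
        return {"ok": True}
    def prove_completion(self, result: str) -> str:
        return f"proof-of({result})"

class StubComputeNode:
    def run(self, request: dict) -> str:
        return f"output for {request['prompt']}"

def handle_request(request: dict, validator, subnets: dict) -> dict:
    """Gateway flow: validate, route to the right subnet, relay result + proof."""
    # 1. Proxy the request to a validator for verification.
    if not validator.validate(request)["ok"]:
        return {"error": "request rejected by validator"}
    # 2. Route the validated job to a compute node on the appropriate subnet.
    result = subnets[request["subnet"]].run(request)
    # 3. Relay the result and its completion proof back to the user.
    return {"result": result, "proof": validator.prove_completion(result)}

response = handle_request(
    {"prompt": "hello", "subnet": "llm"},
    StubValidator(),
    {"llm": StubComputeNode()},
)
print(response["result"])  # output for hello
```

Note that the gateway itself never executes AI work or judges validity; it only orchestrates the flow between the other two node types.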
Who Should Run This Node?
Application developers and businesses building services on NeurochainAI.
Infrastructure providers who specialize in offering reliable API access points.
⚙️ Compute Nodes
Core Function: To execute AI inference requests on specialized subnets.
Compute Nodes are the execution layer of the network where the actual AI computation happens. These nodes are often equipped with powerful hardware (like GPUs) and are organized into subnets, each potentially specializing in different types of AI models or tasks. They receive validated jobs from Gateway Nodes and are responsible for processing them efficiently.
Key Responsibilities:
AI Inference: Perform the computational work required to fulfill an AI request.
Subnet Participation: Operate within specific subnets, processing jobs tailored to that subnet's capabilities.
Secure Execution: Run AI models in a secure environment to produce accurate results.
Return Results: Send the completed job's output back through the network for validation.
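The compute node's job cycle can be sketched as a simple queue-driven loop. Here, jobs are assumed to arrive on a local queue from a Gateway Node, and `run_model` is a placeholder for real GPU inference; both are illustrative assumptions.

```python
import queue

def run_model(model: str, prompt: str) -> str:
    # Placeholder for actual model execution on GPU hardware.
    return f"[{model}] answer to: {prompt}"

def serve(jobs: "queue.Queue", results: "queue.Queue", subnet_model: str) -> None:
    """Process validated jobs for this node's subnet until the queue drains."""
    while not jobs.empty():
        job = jobs.get()
        output = run_model(subnet_model, job["prompt"])
        # Send the output back through the network for validation.
        results.put({"job_id": job["id"], "output": output})

jobs, results = queue.Queue(), queue.Queue()
jobs.put({"id": 1, "prompt": "2+2?"})
serve(jobs, results, subnet_model="demo-llm")
out = results.get()
print(out["output"])
```

Because each node only runs models matching its subnet's capabilities, routing (handled by Gateway Nodes) decides which queue a job lands in; the compute node just executes and returns.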
Who Should Run This Node?
Individuals or data centers with available computational resources (especially GPUs).
AI specialists who want to contribute to and earn from providing specific model inference capabilities.
Summary Comparison

| | 🛡️ Validator Nodes | 🌐 Gateway Nodes | ⚙️ Compute Nodes |
| --- | --- | --- | --- |
| Primary Role | Validate requests & completions | Serve as RPC entry point | Execute AI inference |
| Key Task | Confirm cost trees & proofs | Proxy requests | Run AI models on subnets |
| Interacts With | Users, Compute Nodes | Users, Validators, Compute Nodes | Gateways, Validators |
| System | Proof-of-Stake with rating | RPC communication layer | Execution on subnets |