
What is the Distributed Inference Network (DIN)?

The Distributed Inference Network (DIN) by NeurochainAI is a scalable, decentralized infrastructure designed to support AI inference tasks. DIN leverages a global network of community-powered GPUs and NPUs, giving users on-demand access to high-performance computing resources without relying solely on traditional, centralized cloud providers. This approach not only reduces costs but also provides a more resilient, flexible foundation for AI deployment.


Key Features of DIN

  1. Decentralized Compute Power

    • DIN taps into a network of distributed GPUs, NPUs, and other computing devices, from high-end servers to consumer-grade hardware like gaming consoles and smartphones. This decentralized model allows for increased availability and eliminates dependency on a single provider.

  2. Scalability on Demand

    • Unlike traditional cloud platforms, DIN’s distributed architecture scales with the network itself. Each inference task is assigned to the best-suited GPU available, allowing the infrastructure to handle large volumes of requests without bottlenecks. As more devices join the network, DIN automatically scales to meet demand.

  3. Cost Efficiency

    • By distributing AI inference tasks across a global network, DIN significantly reduces costs. Companies pay per inference, and the system assigns each task to the most cost-effective available GPU, making AI deployment more affordable for businesses of all sizes.

  4. Flexible Use Cases

    • DIN is ideal for non-sensitive AI applications such as customer support, recommendation engines, and other inference-based solutions that don’t require strict data privacy measures. This flexibility makes DIN suitable for a wide range of industries and use cases.


How DIN Works

  1. Task Assignment: When an inference request is made, DIN’s smart routing system assigns the task to the most suitable GPU or NPU in the network based on availability, performance, and cost (a simplified sketch of this selection logic follows the list).

  2. Load Balancing: DIN dynamically balances workloads across the network, ensuring that no single device is overloaded. This results in faster response times and greater reliability.
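This guide does not spell out the routing algorithm itself, so the sketch below is only a hypothetical illustration of how a scheduler might weigh availability, performance, and cost when picking a worker. The field names and weights are assumptions for illustration, not DIN’s actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Device:
    """A GPU/NPU worker in the network (illustrative fields, not DIN's real schema)."""
    name: str
    available: bool
    latency_ms: float           # recent average response time
    cost_per_inference: float   # price charged per task
    current_load: float         # 0.0 (idle) .. 1.0 (saturated)

def route_task(devices: list[Device]) -> Device:
    """Pick the worker with the best availability/performance/cost trade-off.

    Hypothetical scoring: lower latency, lower cost, and lower load all
    improve a device's score. Real DIN routing may weigh these differently.
    """
    candidates = [d for d in devices if d.available and d.current_load < 1.0]
    if not candidates:
        raise RuntimeError("no available workers")
    return min(
        candidates,
        key=lambda d: 0.5 * d.latency_ms + 100 * d.cost_per_inference + 50 * d.current_load,
    )

if __name__ == "__main__":
    pool = [
        Device("datacenter-a100", True, 40.0, 0.002, 0.8),
        Device("gaming-rig-4090", True, 55.0, 0.001, 0.2),
    ]
    print(route_task(pool).name)  # the cheaper, less-loaded consumer GPU wins here
```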


Benefits of Using DIN

  • Cost Savings: DIN reduces AI inference costs by leveraging decentralized resources and assigning tasks to the most efficient devices.

  • Infinite Scalability: DIN’s distributed network grows with each added device, enabling nearly unlimited scalability for AI tasks.

  • Cross-Platform Compatibility: Supports various hardware types, from enterprise-grade GPUs to consumer devices, offering flexibility and accessibility.

  • Faster Deployment: Distributed compute and model quantization allow for rapid inference and shorter load times, ensuring models are ready to use in real-time applications.


Use Cases for DIN

  • Customer Support Bots: Efficiently run AI chat models to enhance customer support experiences without high infrastructure costs.

  • Recommendation Engines: Leverage distributed inference for real-time product or content recommendations across e-commerce and media platforms.

  • Sentiment Analysis: Deploy models like SentimentAI for real-time analysis of customer feedback and social media data.

  • Image and Text Processing: Run models like Flux.1, optimized with GGUF quantization, to handle demanding tasks such as image generation and text processing.


Getting Started with DIN

  1. Create an Account: Sign up on app.neurochain.ai to access the Distributed Inference Network.

  2. Add Credits: Use NCN credits to pay for inference tasks. Credits can be purchased through Stripe or by depositing $NCN.

  3. Generate an API Key: Access the API key from your dashboard to integrate your applications with DIN.

  4. Deploy Your Model: Select your preferred AI model, adjust settings if needed, and start running inference tasks on DIN (a minimal request sketch follows these steps).
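To make the API-key and deployment steps concrete, here is a minimal sketch of what an inference request could look like. The endpoint URL, model name, and payload fields are illustrative assumptions; see the "Generating an API Key for Inference" guide for the actual request format.

```python
import os

import requests

# Illustrative values only: the real endpoint and payload shape are documented
# in the "Generating an API Key for Inference" guide.
API_URL = "https://api.example.neurochain.ai/v1/inference"  # hypothetical endpoint
API_KEY = os.environ["NEUROCHAIN_API_KEY"]  # key generated from your dashboard

payload = {
    "model": "your-chosen-model",  # any model from the DIN catalog
    "prompt": "Summarize today's customer feedback in one sentence.",
}

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=30,
)
response.raise_for_status()
print(response.json())
```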


