Use Cases
NCN Network v2 enables various decentralized AI applications. This page covers common use cases and implementation patterns.
AI Inference as a Service
Overview
Deploy AI models and offer inference as a pay-per-use service.
┌──────────┐ ┌──────────────┐ ┌───────────────┐
│ Client │────▶│ Gateway │────▶│ Compute Node │
│ (App) │ │ (Your Org) │ │ (GPU Farm) │
└──────────┘ └──────────────┘ └───────────────┘
      │                                 │
      └───────────── Payment ───────────┘
Benefits
No infrastructure management: Use existing compute providers
Pay only for usage: No idle GPU costs
Scalable: Add more compute nodes as demand grows
Transparent pricing: On-chain payment records
Implementation
Gateway Operator: Create a subnet with your models
Compute Providers: Join your subnet, provide GPU resources
Clients: Send inference requests, pay per request
Example: Image Classification Service
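As a sketch, a client request might look like the following. The ncn_client module, Client class, subnet name, and response format are illustrative assumptions, not the documented NCN Network SDK.

```python
# Hypothetical client sketch -- the ncn_client module, Client class,
# and subnet name are assumptions, not the official NCN Network SDK.
import base64

from ncn_client import Client  # assumed SDK entry point

client = Client(gateway_url="https://gateway.example.com")

# Read and encode the image as a JSON-safe payload.
with open("cat.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("ascii")

# Send a pay-per-request inference call to the subnet.
result = client.infer(
    subnet="image-classification",  # assumed subnet name
    payload={"image": image_b64},
)
print(result["labels"])  # e.g. [{"label": "tabby cat", "score": 0.92}]
```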
Decentralized Model Hosting
Overview
Host AI models in a decentralized network where multiple compute providers can serve requests.
Architecture
Benefits
Redundancy: Multiple providers serve requests
Geographic distribution: Low latency worldwide
Censorship resistance: No single point of control
Competition: Providers compete on price and quality
Setup
Create Subnet: the gateway operator registers a new subnet and lists the model to host.
Providers Join: compute providers register with the subnet and begin serving requests.
Clients Connect: applications discover the subnet and send inference requests. A sketch covering all three steps follows below.
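In the sketch below, the ncn_client module and the SubnetAdmin, Provider, and Client classes, along with every method and parameter shown, are assumptions rather than the documented NCN Network SDK.

```python
# Hypothetical end-to-end setup -- every name here is an assumption,
# not the documented NCN Network SDK.
from ncn_client import Client, Provider, SubnetAdmin

# 1. The gateway operator creates a subnet and lists a model.
admin = SubnetAdmin(keypair="operator-keypair.json")
subnet = admin.create_subnet(name="resnet50-hosting", model="resnet50.pt")

# 2. A compute provider joins the subnet and starts serving.
provider = Provider(keypair="provider-keypair.json")
provider.join(subnet_id=subnet.id)
provider.serve()

# 3. A client connects through the gateway and sends a request.
client = Client(gateway_url="https://gateway.example.com")
result = client.infer(subnet="resnet50-hosting", payload={"image": "..."})
```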
Text-to-Audio Pipeline (Bark)
Overview
NCN Network supports the Bark text-to-audio model with distributed pipeline execution.
Pipeline Stages
Text → semantic tokens → coarse acoustic tokens → fine acoustic tokens → audio waveform, with each stage able to run on a separate compute node.
Implementation
The Bark pipeline uses three specialized models:

| Stage    | Model File             | Function                           |
|----------|------------------------|------------------------------------|
| Semantic | bark_semantic_model.pt | Text to semantic tokens            |
| Coarse   | bark_coarse_model.pt   | Semantic to coarse acoustic tokens |
| Fine     | bark_fine_model.pt     | Coarse to fine acoustic tokens     |
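To illustrate distributed execution, the sketch below chains the three stages through the network, with each stage free to land on a different compute node. The per-stage infer interface and field names are assumptions.

```python
# Hypothetical stage-by-stage Bark pipeline call -- names are assumptions.
from ncn_client import Client

client = Client(gateway_url="https://gateway.example.com")

# Each stage's output feeds the next; the gateway may route every
# stage to a different compute node.
semantic = client.infer(subnet="bark-tts", stage="semantic",
                        payload={"text": "Hello from NCN Network"})
coarse = client.infer(subnet="bark-tts", stage="coarse",
                      payload={"tokens": semantic["tokens"]})
fine = client.infer(subnet="bark-tts", stage="fine",
                    payload={"tokens": coarse["tokens"]})
```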
Subnet Configuration
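A configuration sketch follows; only the three model file names come from the table above, and every other field name and value is an assumption.

```python
# Hypothetical subnet configuration -- field names and values are
# assumptions; only the model file names come from the table above.
import json

bark_subnet_config = {
    "name": "bark-tts",
    "pipeline": [
        {"stage": "semantic", "model": "bark_semantic_model.pt"},
        {"stage": "coarse", "model": "bark_coarse_model.pt"},
        {"stage": "fine", "model": "bark_fine_model.pt"},
    ],
    "pricing": {"per_request": 0.001},  # illustrative value
}

with open("subnet.json", "w") as f:
    json.dump(bark_subnet_config, f, indent=2)
```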
Client Usage
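End to end, a client call might look like this; the ncn_client API and the response field are assumptions.

```python
# Hypothetical end-to-end text-to-audio request -- names are assumptions.
from ncn_client import Client

client = Client(gateway_url="https://gateway.example.com")

# The gateway runs all three Bark stages and returns the rendered audio.
result = client.infer(subnet="bark-tts", payload={"text": "Hello, world!"})

with open("hello.wav", "wb") as f:
    f.write(result["audio_wav"])  # assumed response field
```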
Custom Model Deployment
Overview
Deploy your own trained models on the NCN Network.
Requirements
Model Format: TorchScript (.pt), ONNX (.onnx), or Safetensors
Executor Script: Python script for inference
Input/Output Schema: JSON format definition
Step-by-Step
1. Export Your Model
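For a PyTorch model, TorchScript export might look like this; the ResNet-50 model is just a stand-in for your own trained network.

```python
# Export a trained PyTorch model to TorchScript (.pt).
import torch
import torchvision

# ResNet-50 stands in for your own trained model.
model = torchvision.models.resnet50(weights="IMAGENET1K_V2")
model.eval()

# Trace with a dummy input matching your inference shape.
example = torch.randn(1, 3, 224, 224)
scripted = torch.jit.trace(model, example)
scripted.save("model.pt")
```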
2. Create Executor Script
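A minimal executor sketch follows, assuming the network invokes a run(payload) entry point that takes and returns JSON-serializable dicts; that interface is an assumption, so adapt it to whatever contract NCN actually specifies.

```python
# executor.py -- minimal inference executor sketch.
# The run(payload) entry point is an assumed interface.
import base64
import io

import torch
from PIL import Image
from torchvision import transforms

model = torch.jit.load("model.pt")
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])

def run(payload: dict) -> dict:
    """Decode the input image, run inference, return the top-1 class index."""
    image = Image.open(io.BytesIO(base64.b64decode(payload["image"])))
    batch = preprocess(image.convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        logits = model(batch)
    return {"class_id": int(logits.argmax(dim=1).item())}
```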
3. Configure Subnet
4. Deploy
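Steps 3 and 4 might then look like the sketch below; the SubnetAdmin class, the create_subnet signature, and the schema fields are all assumptions.

```python
# Hypothetical configure-and-deploy sketch -- names are assumptions.
from ncn_client import SubnetAdmin

admin = SubnetAdmin(keypair="operator-keypair.json")
subnet = admin.create_subnet(
    name="my-custom-model",
    model="model.pt",
    executor="executor.py",
    schema={"input": {"image": "base64"}, "output": {"class_id": "int"}},
)
print(f"Deployed subnet {subnet.id}")
```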
Batch Processing
Overview
Process large batches of inference requests efficiently.
Architecture
Implementation
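One client-side approach is to issue requests concurrently and let the gateway fan them out across compute nodes. The AsyncClient class and its async infer method are assumptions.

```python
# Hypothetical concurrent batch inference -- the AsyncClient API is assumed.
import asyncio

from ncn_client import AsyncClient  # assumed async SDK variant

async def classify_batch(images: list[dict]) -> list[dict]:
    client = AsyncClient(gateway_url="https://gateway.example.com")
    # Fire all requests concurrently; the gateway can spread them
    # across multiple compute nodes.
    tasks = [
        client.infer(subnet="image-classification", payload=img)
        for img in images
    ]
    return await asyncio.gather(*tasks)

results = asyncio.run(classify_batch([{"image": "..."}] * 100))
```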
Real-Time Applications
Overview
Build real-time AI applications with WebSocket streaming.
WebSocket Connection
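A streaming sketch using the Python websockets library is shown below; the endpoint path and message schema are assumptions.

```python
# Streaming over WebSocket with the `websockets` library.
# The endpoint URL and message schema are assumptions.
import asyncio
import json

import websockets

async def stream_transcription() -> None:
    uri = "wss://gateway.example.com/subnets/transcribe/stream"  # assumed
    async with websockets.connect(uri) as ws:
        # Send an audio chunk, then read partial transcripts as they arrive.
        await ws.send(json.dumps({"type": "audio_chunk", "data": "..."}))
        async for message in ws:
            event = json.loads(message)
            print(event.get("partial_text", ""))

asyncio.run(stream_transcription())
```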
Use Cases
Live transcription
Real-time translation
Interactive chatbots
Voice assistants
Next Steps
Getting Started - Set up your environment
Key Concepts - Understand the system
Client Integration - Build your client