Use Cases
AI Inference as a Service
Overview
┌──────────┐      ┌──────────────┐      ┌───────────────┐
│  Client  │─────▶│   Gateway    │─────▶│  Compute Node │
│  (App)   │      │  (Your Org)  │      │   (GPU Farm)  │
└──────────┘      └──────────────┘      └───────────────┘
      │                                         │
      └───────────── Payment ───────────────────┘

Benefits
Implementation
Example: Image Classification Service
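A minimal client-side sketch for an image classification request. The gateway URL, endpoint path, field names, and model identifier below are illustrative placeholders, not a fixed API; substitute the values from your own deployment.

```python
import base64
import json

# Hypothetical gateway endpoint -- replace with your deployment's URL.
GATEWAY_URL = "https://gateway.example.com/v1/infer"

def build_classification_request(image_bytes: bytes, model: str = "resnet50") -> str:
    """Encode an image into a JSON inference request for the gateway."""
    payload = {
        "model": model,
        "task": "image-classification",
        "inputs": {"image_b64": base64.b64encode(image_bytes).decode("ascii")},
    }
    return json.dumps(payload)

# Sending the request could then look like (requires the `requests` package):
#   resp = requests.post(GATEWAY_URL,
#                        data=build_classification_request(img_bytes),
#                        headers={"Content-Type": "application/json"})
#   print(resp.json())
```

Base64-encoding the image keeps the request a single JSON document, which is the simplest framing for a gateway that proxies to remote compute nodes.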
Decentralized Model Hosting
Overview
Architecture
Benefits
Setup
Text-to-Audio Pipeline (Bark)
Overview
Pipeline Stages
Implementation
Stage | Model | Purpose
Subnet Configuration
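As a sketch only, a subnet configuration for a Bark text-to-audio pipeline might look like the fragment below. Every field name here is an assumption for illustration; consult your platform's actual configuration schema.

```yaml
# Hypothetical subnet configuration -- illustrative field names only.
subnet:
  name: bark-tts
  models:
    - id: suno/bark-small
      task: text-to-audio
  executor:
    gpu: true
    min_vram_gb: 8
  pricing:
    per_second_audio: 0.002
```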
Client Usage
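A hedged sketch of client-side usage for the text-to-audio pipeline: one helper builds the JSON request, another decodes a base64 audio payload from a response and writes it to disk. The request/response field names (`voice_preset`, `audio_b64`) are assumptions, not a documented schema.

```python
import base64
import json

def build_tts_request(text: str, voice: str = "v2/en_speaker_6") -> str:
    """JSON request for the text-to-audio subnet (field names illustrative)."""
    return json.dumps({
        "model": "bark",
        "task": "text-to-audio",
        "inputs": {"text": text, "voice_preset": voice},
    })

def save_audio(response_json: str, path: str) -> int:
    """Decode base64 audio bytes from a gateway response and write them
    to `path`. Returns the number of bytes written."""
    audio = base64.b64decode(json.loads(response_json)["audio_b64"])
    with open(path, "wb") as f:
        f.write(audio)
    return len(audio)
```

The same request/decode pattern applies to any binary-output pipeline; only the `task` and input fields change.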
Custom Model Deployment
Overview
Requirements
Step-by-Step
1. Export Your Model
2. Create Executor Script
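One common executor shape is a long-lived process that reads one JSON request per line on stdin and writes one JSON response per line on stdout. The sketch below assumes that line-delimited interface and stubs the model call; the actual contract between the platform and your executor may differ.

```python
import json
import sys

def handle_request(request: dict) -> dict:
    """Run one inference. The model call is stubbed so the sketch is
    self-contained; a real executor would invoke your exported model."""
    text = request["inputs"]["text"]
    # result = model.generate(text)   # your model call goes here
    result = text.upper()             # stub
    return {"status": "ok", "outputs": {"text": result}}

def main() -> None:
    # One JSON request per line in, one JSON response per line out.
    for line in sys.stdin:
        response = handle_request(json.loads(line))
        print(json.dumps(response), flush=True)

# To run as a long-lived executor process:
#   if __name__ == "__main__":
#       main()
```

Keeping `handle_request` free of I/O makes the inference path easy to unit-test independently of the process wiring.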
3. Configure Subnet
4. Deploy
Batch Processing
Overview
Architecture
Implementation
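The core of any batch-processing setup is grouping individual requests into fixed-size batches so the GPU handles many inputs per forward pass. A minimal, generic sketch:

```python
from typing import Iterable, Iterator, List, TypeVar

T = TypeVar("T")

def batched(items: Iterable[T], batch_size: int) -> Iterator[List[T]]:
    """Group individual inference requests into fixed-size batches.
    The final batch may be smaller than `batch_size`."""
    batch: List[T] = []
    for item in items:
        batch.append(item)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:  # flush the remainder
        yield batch

# Example:
#   list(batched(range(5), 2))  ->  [[0, 1], [2, 3], [4]]
```

Because it is a generator over any iterable, the same helper works for a bounded job list or an unbounded request queue.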
Real-Time Applications
Overview
WebSocket Connection
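Real-time inference typically multiplexes partial and final results over a single socket, which requires a small message envelope. The sketch below defines a hypothetical JSON envelope (the `id`/`kind`/`payload` fields are assumptions, not a fixed protocol) plus a commented client loop using the third-party `websockets` package.

```python
import json

def encode_frame(request_id: str, kind: str, payload: dict) -> str:
    """Serialize one WebSocket message. `kind` is 'partial', 'final',
    or 'error'; the envelope fields are illustrative only."""
    return json.dumps({"id": request_id, "kind": kind, "payload": payload})

def decode_frame(raw: str) -> tuple:
    """Parse a frame back into (request_id, kind, payload)."""
    msg = json.loads(raw)
    return msg["id"], msg["kind"], msg["payload"]

# With the `websockets` package, a streaming client could look like:
#   async with websockets.connect("wss://gateway.example.com/stream") as ws:
#       await ws.send(encode_frame("req-1", "partial", {"text": "hello"}))
#       async for raw in ws:
#           rid, kind, payload = decode_frame(raw)
#           if kind == "final":
#               break
```

Tagging every frame with the request id lets one connection carry several in-flight requests without ambiguity.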
Use Cases
Next Steps