# Use Cases

NCN Network v2 enables various decentralized AI applications. This page covers common use cases and implementation patterns.

***

## AI Inference as a Service

### Overview

Deploy AI models and offer inference as a pay-per-use service.

```
┌──────────┐     ┌──────────────┐     ┌───────────────┐
│  Client  │────▶│   Gateway    │────▶│ Compute Node  │
│  (App)   │     │  (Your Org)  │     │  (GPU Farm)   │
└──────────┘     └──────────────┘     └───────────────┘
      │                                       │
      └───────────── Payment ─────────────────┘
```

### Benefits

* **No infrastructure management**: Use existing compute providers
* **Pay only for usage**: No idle GPU costs
* **Scalable**: Add more compute nodes as demand grows
* **Transparent pricing**: On-chain payment records

### Implementation

1. **Gateway Operator**: Create a subnet with your models
2. **Compute Providers**: Join your subnet, provide GPU resources
3. **Clients**: Send inference requests, pay per request

### Example: Image Classification Service

```python
# Client code
import requests

response = requests.post(
    "https://your-gateway.com/api/v1/inference",
    json={
        "model_uuid": "resnet50",
        "input_data": {"image_url": "https://example.com/image.jpg"}
    },
    headers={"Authorization": "Bearer YOUR_TOKEN"}
)

response.raise_for_status()  # surface HTTP errors before parsing
result = response.json()
print(f"Classification: {result['output_data']}")
```

***

## Decentralized Model Hosting

### Overview

Host AI models in a decentralized network where multiple compute providers can serve requests.

### Architecture

```
                    ┌────────────────┐
                    │    Subnet      │
                    │  (Model Set)   │
                    └───────┬────────┘
                            │
        ┌───────────────────┼───────────────────┐
        │                   │                   │
┌───────▼───────┐   ┌───────▼───────┐   ┌───────▼───────┐
│  Compute 1    │   │  Compute 2    │   │  Compute 3    │
│  (Provider A) │   │  (Provider B) │   │  (Provider C) │
└───────────────┘   └───────────────┘   └───────────────┘
```

### Benefits

* **Redundancy**: Multiple providers serve requests
* **Geographic distribution**: Low latency worldwide
* **Censorship resistance**: No single point of control
* **Competition**: Providers compete on price and quality

### Setup

1. **Create Subnet**:

   ```bash
   subnet-cli create -c model_config.json
   ```
2. **Providers Join**:

   ```bash
   compute_node --subnet-id 1 --sync-models
   ```
3. **Clients Connect**: No extra configuration is required; requests sent to the gateway are automatically routed to an available provider, as the sketch below shows.
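
A minimal client sketch, reusing the hypothetical `NCNClient` that appears in the batch-processing example later on this page. The client only addresses the gateway, which selects an available provider:

```python
from ncn_client import NCNClient

# Provider selection happens at the gateway; client code never changes
client = NCNClient("https://gateway.example.com")
result = client.inference("model_name", {"input": [1.0, 2.0, 3.0]})
print(result)
```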

***

## Text-to-Audio Pipeline (Bark)

### Overview

NCN Network supports the Bark text-to-audio model with distributed pipeline execution.

### Pipeline Stages

```
Input Text → Semantic → Coarse → Fine → Audio Output
              Stage      Stage    Stage
```

### Implementation

The Bark pipeline uses three specialized models:

| Stage    | Model                    | Purpose                            |
| -------- | ------------------------ | ---------------------------------- |
| Semantic | `bark_semantic_model.pt` | Text to semantic tokens            |
| Coarse   | `bark_coarse_model.pt`   | Semantic to coarse acoustic tokens |
| Fine     | `bark_fine_model.pt`     | Coarse to fine acoustic tokens     |

### Subnet Configuration

```json
{
  "gateway_address": "0xYourGateway",
  "models": [
    {
      "name": "bark_semantic",
      "download_url": "https://huggingface.co/suno/bark/resolve/main/...",
      "executor_script": "semantic_model_executor.py"
    },
    {
      "name": "bark_coarse",
      "download_url": "https://huggingface.co/suno/bark/resolve/main/...",
      "executor_script": "coarse_model_executor.py"
    },
    {
      "name": "bark_fine",
      "download_url": "https://huggingface.co/suno/bark/resolve/main/...",
      "executor_script": "fine_model_executor.py"
    }
  ]
}
```

### Client Usage

```python
# Run the Bark pipeline stage by stage
from ncn_client import NCNClient

client = NCNClient("https://gateway.example.com")

text = "Hello, this is a test of the NCN Network text-to-speech system."

# Stage 1: Semantic
semantic_tokens = client.inference("bark_semantic", {"text": text})

# Stage 2: Coarse
coarse_tokens = client.inference("bark_coarse", {"semantic_tokens": semantic_tokens})

# Stage 3: Fine
audio_data = client.inference("bark_fine", {"coarse_tokens": coarse_tokens})

# Save audio
with open("output.wav", "wb") as f:
    f.write(audio_data)
```

***

## Custom Model Deployment

### Overview

Deploy your own trained models on the NCN Network.

### Requirements

1. **Model Format**: TorchScript (`.pt`), ONNX (`.onnx`), or Safetensors (`.safetensors`)
2. **Executor Script**: Python script for inference
3. **Input/Output Schema**: JSON format definition

### Step-by-Step

#### 1. Export Your Model

```python
import torch

# Your trained model
model = MyModel()
model.load_state_dict(torch.load("model_weights.pth"))
model.eval()

# Export to TorchScript
scripted = torch.jit.script(model)
scripted.save("my_model.pt")
```
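
If `torch.jit.script` fails on unsupported Python constructs, tracing with a representative input is the usual fallback. A sketch continuing from the snippet above (the input shape is illustrative):

```python
# Alternative: trace the model with a representative input
example_input = torch.randn(1, 3, 224, 224)  # use your model's real input shape
traced = torch.jit.trace(model, example_input)
traced.save("my_model.pt")
```

Note that tracing records a single execution path, so it is only safe for models without data-dependent control flow.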

#### 2. Create Executor Script

```python
# my_model_executor.py
import os
import json
import torch

def main():
    # Load input
    input_file = os.environ.get("INPUT_FILE")
    with open(input_file) as f:
        input_data = json.load(f)
    
    # Load model
    model = torch.jit.load("my_model.pt")
    
    # Run inference
    input_tensor = torch.tensor(input_data["input"])
    output = model(input_tensor)
    
    # Write output
    output_file = os.environ.get("OUTPUT_FILE")
    with open(output_file, "w") as f:
        json.dump({"output": output.tolist()}, f)

if __name__ == "__main__":
    main()
```
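
It is worth exercising the executor locally before deploying, using the same `INPUT_FILE`/`OUTPUT_FILE` environment-variable contract the compute node provides. A test sketch (file names are illustrative, and `my_model.pt` must be in the working directory):

```python
# test_executor.py
import json
import os
import subprocess

# Write a sample input payload
with open("input.json", "w") as f:
    json.dump({"input": [[1.0, 2.0, 3.0]]}, f)

# Invoke the executor the way a compute node would
env = {**os.environ, "INPUT_FILE": "input.json", "OUTPUT_FILE": "output.json"}
subprocess.run(["python", "my_model_executor.py"], env=env, check=True)

with open("output.json") as f:
    print(json.load(f))
```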

#### 3. Configure Subnet

```json
{
  "models": [
    {
      "name": "my_model",
      "download_url": "https://your-storage.com/my_model.pt",
      "executor_script": "base64_encoded_executor_script",
      "file_size_bytes": 500000000
    }
  ]
}
```
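
The `executor_script` value above is a placeholder for the script's base64-encoded contents. Assuming that is how the script is embedded, encoding it is straightforward:

```python
import base64

# Encode the executor script for embedding in the subnet config
with open("my_model_executor.py", "rb") as f:
    encoded = base64.b64encode(f.read()).decode("ascii")

print(encoded)  # paste into the "executor_script" field
```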

#### 4. Deploy

```bash
subnet-cli create -c my_model_config.json
```

***

## Batch Processing

### Overview

Process large batches of inference requests efficiently.

### Architecture

```
┌─────────────────────────────────────────────────────────┐
│                    Batch Job                             │
│  ┌─────────┐  ┌─────────┐  ┌─────────┐  ┌─────────┐    │
│  │ Request │  │ Request │  │ Request │  │ Request │ ...│
│  │    1    │  │    2    │  │    3    │  │    N    │    │
│  └────┬────┘  └────┬────┘  └────┬────┘  └────┬────┘    │
└───────┼────────────┼────────────┼────────────┼──────────┘
        │            │            │            │
        ▼            ▼            ▼            ▼
    ┌───────────────────────────────────────────────┐
    │              Load Balancer                     │
    │         (Gateway Distribution)                 │
    └───────────────────────────────────────────────┘
        │            │            │            │
        ▼            ▼            ▼            ▼
    ┌────────┐  ┌────────┐  ┌────────┐  ┌────────┐
    │Compute1│  │Compute2│  │Compute3│  │Compute4│
    └────────┘  └────────┘  └────────┘  └────────┘
```

### Implementation

```python
import asyncio
from ncn_client import NCNClient

async def process_batch(items):
    client = NCNClient("https://gateway.example.com")
    
    # Submit all requests concurrently
    tasks = [
        client.inference_async("model_name", item)
        for item in items
    ]
    
    # Wait for all results
    results = await asyncio.gather(*tasks)
    return results

# Process 1000 items; load_batch_data() is a placeholder for your own loader
batch = load_batch_data()
results = asyncio.run(process_batch(batch))
```
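
Submitting thousands of requests at once can overwhelm a gateway. A variant of the sketch above caps in-flight requests with a semaphore (the limit of 50 is an arbitrary starting point; tune it to your gateway):

```python
import asyncio
from ncn_client import NCNClient

async def process_batch_bounded(items, max_concurrent=50):
    client = NCNClient("https://gateway.example.com")
    semaphore = asyncio.Semaphore(max_concurrent)

    async def run_one(item):
        # Only max_concurrent requests are in flight at any moment
        async with semaphore:
            return await client.inference_async("model_name", item)

    return await asyncio.gather(*(run_one(item) for item in items))
```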

***

## Real-Time Applications

### Overview

Build real-time AI applications with WebSocket streaming.

### WebSocket Connection

```javascript
const ws = new WebSocket("wss://gateway.example.com/ws");

ws.onopen = () => {
    // Subscribe to task updates
    ws.send(JSON.stringify({
        type: "subscribe",
        request_id: "my-request-123"
    }));
};

ws.onmessage = (event) => {
    const data = JSON.parse(event.data);
    
    if (data.type === "task_status") {
        console.log(`Status: ${data.status}`);
    } else if (data.type === "task_result") {
        console.log(`Result: ${data.output_data}`);
    }
};
```
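
The same subscription flow works from Python; this sketch assumes the third-party `websockets` package, with message fields mirroring the JavaScript example above:

```python
import asyncio
import json
import websockets

async def watch_task(request_id):
    async with websockets.connect("wss://gateway.example.com/ws") as ws:
        # Subscribe to updates for one request
        await ws.send(json.dumps({"type": "subscribe", "request_id": request_id}))
        async for message in ws:
            data = json.loads(message)
            if data["type"] == "task_status":
                print("Status:", data["status"])
            elif data["type"] == "task_result":
                print("Result:", data["output_data"])
                break

asyncio.run(watch_task("my-request-123"))
```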

### Use Cases

* Live transcription
* Real-time translation
* Interactive chatbots
* Voice assistants

***

## Next Steps

* [Getting Started](https://docs.neurochain.ai/nc/neurochainai-guides/introduction/getting-started) - Set up your environment
* [Key Concepts](https://docs.neurochain.ai/nc/neurochainai-guides/introduction/key-concepts) - Understand the system
* [Client Integration](https://docs.neurochain.ai/nc/neurochainai-guides/clients/clients) - Build your client
