Open API Integration

Step-by-Step Guide for Open API Integration

Follow these steps to integrate NeurochainAI’s AI inference into your existing platform using the Open API protocol:

Step 1: Access the NeurochainAI App

  1. Visit app.neurochain.ai.

  2. Navigate to the Inference tab from the left-hand sidebar.

Step 2: Obtain Your API Key

  1. Choose the API you’d like to integrate with.

  2. Click on Generate Key and copy the API key for use in your integration.

Do not share your API key with anyone, as it provides direct access to your AI models and infrastructure.
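
If you plan to call the API from your own code, keep the key out of source files. Below is a minimal sketch that reads it from an environment variable; the variable name NEUROCHAIN_API_KEY is only a convention used for the examples in this guide, not something the platform requires.

```python
import os

# Read the NeurochainAI API key from an environment variable instead of
# hard-coding it. The variable name NEUROCHAIN_API_KEY is just a convention
# used in this guide's examples.
API_KEY = os.environ["NEUROCHAIN_API_KEY"]
```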

Step 3: Configure Your AI Integration

  1. If you’re using an AI service such as MindMac, or any other platform that supports Open API, open that platform’s settings.

  2. Locate the API Endpoint & Proxy section and click Add Network.

  3. In the form that appears, select "Other" as the provider and input the following details:

    • API URL: https://ncmb.neurochain.io/v1/chat/completions

    • API Key: Paste the key you generated earlier.
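
If you are wiring the endpoint into your own code rather than a platform UI, those same two values are usually all you need. Below is a minimal sketch using the openai Python package; whether the NeurochainAI endpoint accepts this client and its default Bearer-token authentication is an assumption, and base_url omits the /chat/completions suffix because the client appends it.

```python
import os
from openai import OpenAI

# Point an OpenAI-compatible client at NeurochainAI.
# Assumptions: the endpoint accepts the Bearer token this client sends by
# default and follows the standard /v1/chat/completions path layout.
client = OpenAI(
    base_url="https://ncmb.neurochain.io/v1",
    api_key=os.environ["NEUROCHAIN_API_KEY"],  # key generated in Step 2
)
```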

Step 4: Select a Model

  1. Once the connection is established, you’ll need to choose an AI model from NeurochainAI’s network.

  2. For a quick start, you can use Mistral-7B-Instruct-v0.2-GPTQ-Neurochain-custom-io, a model optimized for a wide range of tasks such as natural language processing and data analysis.
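
With a model selected, you can also send a request straight to the endpoint from a script. The sketch below uses the requests library; the Bearer authorization header and the OpenAI-style request and response shapes are assumptions based on the chat-completions URL given in Step 3.

```python
import os
import requests

API_URL = "https://ncmb.neurochain.io/v1/chat/completions"
API_KEY = os.environ["NEUROCHAIN_API_KEY"]  # key generated in Step 2

payload = {
    # Model from Step 4; swap in any other model available on DIN.
    "model": "Mistral-7B-Instruct-v0.2-GPTQ-Neurochain-custom-io",
    "messages": [
        {"role": "user", "content": "Summarize the Distributed Inference Network in one sentence."}
    ],
}

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=60,
)
response.raise_for_status()

# Assumes an OpenAI-style response body with a "choices" list.
print(response.json()["choices"][0]["message"]["content"])
```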

Step 5: Finalize and Test the Integration

  1. After adding the model, save your changes and return to your platform’s dashboard.

  2. Initiate a test query by selecting the NeurochainAI connection and one of the models you’ve added.

  3. Run a few tests to ensure everything is working smoothly.
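
If you prefer to run the same smoke test from code, the sketch below sends a few prompts and checks that each one returns a non-empty reply. It reuses the request shape assumed in Step 4; the helper name ask is purely for illustration.

```python
import os
import requests

def ask(prompt: str) -> str:
    """Send one chat-completion request to the NeurochainAI endpoint (see Steps 3-4)."""
    resp = requests.post(
        "https://ncmb.neurochain.io/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['NEUROCHAIN_API_KEY']}"},
        json={
            "model": "Mistral-7B-Instruct-v0.2-GPTQ-Neurochain-custom-io",
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

# A few quick checks that the integration responds at all.
for prompt in ["Reply with the word OK.", "What is 2 + 2?", "Name one use case for sentiment analysis."]:
    reply = ask(prompt)
    assert reply.strip(), f"Empty reply for prompt: {prompt!r}"
    print(f"{prompt} -> {reply[:60]}")
```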

Step 6: Monitor Usage and Costs

  1. You can monitor your usage, track costs in NCN credits, and optimize your API calls directly from the NeurochainAI dashboard.
