Open API Integration
Follow these steps to integrate NeurochainAI’s AI inference into your existing platform using the Open API protocol:
Visit app.neurochain.ai.
Navigate to the Inference tab from the left-hand sidebar.
Choose the API you’d like to integrate with.
Click on Generate Key and copy the API key for use in your integration.
Do not share your API key with anyone: whoever holds it can run inference against your account and consume your NCN credits.
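One common way to keep the key out of your source code is to load it from an environment variable. Below is a minimal Python sketch; the variable name NEUROCHAIN_API_KEY is chosen for this example and is not required by the platform:

```python
import os

# Read the key from the environment instead of hardcoding it in your code.
# NEUROCHAIN_API_KEY is an arbitrary name used for this example.
API_KEY = os.environ.get("NEUROCHAIN_API_KEY")
if not API_KEY:
    raise RuntimeError("Set the NEUROCHAIN_API_KEY environment variable before running.")
```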
If you’re using an AI service like MindMac, or any other platform that supports the Open API protocol, open that platform’s settings.
Locate the API Endpoint & Proxy section and click Add Network.
In the form that appears, select "Other" as the provider and input the following details:
API URL: https://ncmb.neurochain.io/v1/chat/completions
API Key: Paste the key you generated earlier.
Once the connection is established, you’ll need to choose an AI model from NeurochainAI’s network.
For a quick start, you can use Mistral-7B-Instruct-v0.2-GPTQ-Neurochain-custom-io, a general-purpose model suited to tasks such as natural language processing and data analysis.
After adding the model, save your changes and return to your platform’s dashboard.
Initiate a test query by selecting the NeurochainAI connection and one of the models you’ve added.
Run a few tests to ensure everything is working smoothly.
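If you want to verify the integration outside of a GUI platform, you can also send a test request straight to the endpoint. The sketch below assumes the endpoint accepts an OpenAI-style chat completions payload (which the /v1/chat/completions path suggests), uses bearer-token authorization, and reads the key from the NEUROCHAIN_API_KEY environment variable introduced in the earlier example; adjust the model name if you added a different one:

```python
import os
import requests

API_URL = "https://ncmb.neurochain.io/v1/chat/completions"
API_KEY = os.environ["NEUROCHAIN_API_KEY"]  # example variable name, see above

# Assumed OpenAI-style chat completions payload; the exact schema may differ.
payload = {
    "model": "Mistral-7B-Instruct-v0.2-GPTQ-Neurochain-custom-io",
    "messages": [
        {"role": "user", "content": "Reply with a short greeting to confirm the connection."}
    ],
}

response = requests.post(
    API_URL,
    headers={
        "Authorization": f"Bearer {API_KEY}",  # assumed bearer-token auth scheme
        "Content-Type": "application/json",
    },
    json=payload,
    timeout=60,
)

# Print the HTTP status and raw body so you can inspect the model's reply.
print(response.status_code)
print(response.text)
```

A successful response should contain the model’s reply; an authorization error usually means the key was not exported or was pasted incorrectly.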
You can monitor your usage, track costs in NCN credits, and optimize your API calls directly from the NeurochainAI dashboard.